00:00:00.001 Started by upstream project "autotest-nightly" build number 3910 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3288 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.200 > git --version # 'git version 2.39.2' 00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.236 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.401 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.415 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.428 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD) 00:00:04.428 > git config core.sparsecheckout # timeout=10 00:00:04.438 > git read-tree -mu HEAD # timeout=10 00:00:04.454 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5 00:00:04.469 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems" 00:00:04.470 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:04.591 [Pipeline] Start of Pipeline 00:00:04.602 [Pipeline] library 00:00:04.604 Loading library shm_lib@master 00:00:04.604 Library shm_lib@master is cached. Copying from home. 00:00:04.620 [Pipeline] node 00:00:19.622 Still waiting to schedule task 00:00:19.622 Waiting for next available executor on ‘vagrant-vm-host’ 00:09:56.716 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:56.718 [Pipeline] { 00:09:56.733 [Pipeline] catchError 00:09:56.736 [Pipeline] { 00:09:56.752 [Pipeline] wrap 00:09:56.762 [Pipeline] { 00:09:56.769 [Pipeline] stage 00:09:56.771 [Pipeline] { (Prologue) 00:09:56.788 [Pipeline] echo 00:09:56.789 Node: VM-host-SM4 00:09:56.794 [Pipeline] cleanWs 00:09:56.802 [WS-CLEANUP] Deleting project workspace... 00:09:56.802 [WS-CLEANUP] Deferred wipeout is used... 
00:09:56.808 [WS-CLEANUP] done 00:09:57.016 [Pipeline] setCustomBuildProperty 00:09:57.167 [Pipeline] httpRequest 00:09:57.189 [Pipeline] echo 00:09:57.190 Sorcerer 10.211.164.101 is alive 00:09:57.197 [Pipeline] httpRequest 00:09:57.200 HttpMethod: GET 00:09:57.200 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:09:57.201 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:09:57.202 Response Code: HTTP/1.1 200 OK 00:09:57.203 Success: Status code 200 is in the accepted range: 200,404 00:09:57.203 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:09:57.348 [Pipeline] sh 00:09:57.626 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:09:57.642 [Pipeline] httpRequest 00:09:57.661 [Pipeline] echo 00:09:57.662 Sorcerer 10.211.164.101 is alive 00:09:57.672 [Pipeline] httpRequest 00:09:57.676 HttpMethod: GET 00:09:57.677 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:09:57.677 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:09:57.679 Response Code: HTTP/1.1 200 OK 00:09:57.679 Success: Status code 200 is in the accepted range: 200,404 00:09:57.680 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:09:59.821 [Pipeline] sh 00:10:00.098 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:10:02.722 [Pipeline] sh 00:10:03.002 + git -C spdk log --oneline -n5 00:10:03.002 f7b31b2b9 log: declare g_deprecation_epoch static 00:10:03.002 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:10:03.002 3731556bd lvol: declare g_lvol_if static 00:10:03.002 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:10:03.002 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:10:03.020 [Pipeline] writeFile 00:10:03.034 [Pipeline] sh 00:10:03.310 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:10:03.321 [Pipeline] sh 00:10:03.659 + cat autorun-spdk.conf 00:10:03.659 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:03.659 SPDK_TEST_NVMF=1 00:10:03.659 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:03.659 SPDK_TEST_URING=1 00:10:03.659 SPDK_TEST_VFIOUSER=1 00:10:03.659 SPDK_TEST_USDT=1 00:10:03.659 SPDK_RUN_ASAN=1 00:10:03.659 SPDK_RUN_UBSAN=1 00:10:03.659 NET_TYPE=virt 00:10:03.659 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:03.725 RUN_NIGHTLY=1 00:10:03.727 [Pipeline] } 00:10:03.748 [Pipeline] // stage 00:10:03.762 [Pipeline] stage 00:10:03.764 [Pipeline] { (Run VM) 00:10:03.779 [Pipeline] sh 00:10:04.058 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:10:04.058 + echo 'Start stage prepare_nvme.sh' 00:10:04.058 Start stage prepare_nvme.sh 00:10:04.058 + [[ -n 5 ]] 00:10:04.058 + disk_prefix=ex5 00:10:04.058 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:10:04.058 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:10:04.058 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:10:04.058 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:04.058 ++ SPDK_TEST_NVMF=1 00:10:04.058 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:04.058 ++ SPDK_TEST_URING=1 00:10:04.058 ++ SPDK_TEST_VFIOUSER=1 00:10:04.058 ++ SPDK_TEST_USDT=1 00:10:04.058 ++ SPDK_RUN_ASAN=1 00:10:04.058 ++ 
SPDK_RUN_UBSAN=1 00:10:04.058 ++ NET_TYPE=virt 00:10:04.058 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:04.058 ++ RUN_NIGHTLY=1 00:10:04.058 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:10:04.058 + nvme_files=() 00:10:04.058 + declare -A nvme_files 00:10:04.058 + backend_dir=/var/lib/libvirt/images/backends 00:10:04.058 + nvme_files['nvme.img']=5G 00:10:04.058 + nvme_files['nvme-cmb.img']=5G 00:10:04.058 + nvme_files['nvme-multi0.img']=4G 00:10:04.058 + nvme_files['nvme-multi1.img']=4G 00:10:04.058 + nvme_files['nvme-multi2.img']=4G 00:10:04.058 + nvme_files['nvme-openstack.img']=8G 00:10:04.058 + nvme_files['nvme-zns.img']=5G 00:10:04.058 + (( SPDK_TEST_NVME_PMR == 1 )) 00:10:04.058 + (( SPDK_TEST_FTL == 1 )) 00:10:04.058 + (( SPDK_TEST_NVME_FDP == 1 )) 00:10:04.058 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:10:04.058 + for nvme in "${!nvme_files[@]}" 00:10:04.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:10:04.058 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:10:04.058 + for nvme in "${!nvme_files[@]}" 00:10:04.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:10:04.058 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:10:04.058 + for nvme in "${!nvme_files[@]}" 00:10:04.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:10:04.316 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:10:04.316 + for nvme in "${!nvme_files[@]}" 00:10:04.316 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:10:04.316 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:10:04.316 + for nvme in "${!nvme_files[@]}" 00:10:04.316 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:10:04.316 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:10:04.316 + for nvme in "${!nvme_files[@]}" 00:10:04.316 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:10:04.316 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:10:04.574 + for nvme in "${!nvme_files[@]}" 00:10:04.574 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:10:04.574 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:10:04.574 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:10:04.574 + echo 'End stage prepare_nvme.sh' 00:10:04.574 End stage prepare_nvme.sh 00:10:04.586 [Pipeline] sh 00:10:04.866 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:10:04.866 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:10:04.866 00:10:04.866 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:10:04.866 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:10:04.866 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:10:04.866 HELP=0 00:10:04.866 DRY_RUN=0 00:10:04.866 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:10:04.866 NVME_DISKS_TYPE=nvme,nvme, 00:10:04.866 NVME_AUTO_CREATE=0 00:10:04.866 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:10:04.866 NVME_CMB=,, 00:10:04.866 NVME_PMR=,, 00:10:04.866 NVME_ZNS=,, 00:10:04.866 NVME_MS=,, 00:10:04.866 NVME_FDP=,, 00:10:04.866 SPDK_VAGRANT_DISTRO=fedora38 00:10:04.866 SPDK_VAGRANT_VMCPU=10 00:10:04.866 SPDK_VAGRANT_VMRAM=12288 00:10:04.866 SPDK_VAGRANT_PROVIDER=libvirt 00:10:04.866 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:10:04.866 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:10:04.866 SPDK_OPENSTACK_NETWORK=0 00:10:04.866 VAGRANT_PACKAGE_BOX=0 00:10:04.866 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:10:04.866 FORCE_DISTRO=true 00:10:04.866 VAGRANT_BOX_VERSION= 00:10:04.866 EXTRA_VAGRANTFILES= 00:10:04.866 NIC_MODEL=e1000 00:10:04.866 00:10:04.866 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:10:04.866 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:10:09.048 Bringing machine 'default' up with 'libvirt' provider... 00:10:09.350 ==> default: Creating image (snapshot of base box volume). 00:10:09.350 ==> default: Creating domain with the following settings... 
00:10:09.350 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721666890_043356235c928c0dd085 00:10:09.350 ==> default: -- Domain type: kvm 00:10:09.350 ==> default: -- Cpus: 10 00:10:09.350 ==> default: -- Feature: acpi 00:10:09.350 ==> default: -- Feature: apic 00:10:09.350 ==> default: -- Feature: pae 00:10:09.350 ==> default: -- Memory: 12288M 00:10:09.350 ==> default: -- Memory Backing: hugepages: 00:10:09.350 ==> default: -- Management MAC: 00:10:09.350 ==> default: -- Loader: 00:10:09.350 ==> default: -- Nvram: 00:10:09.350 ==> default: -- Base box: spdk/fedora38 00:10:09.350 ==> default: -- Storage pool: default 00:10:09.350 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721666890_043356235c928c0dd085.img (20G) 00:10:09.350 ==> default: -- Volume Cache: default 00:10:09.350 ==> default: -- Kernel: 00:10:09.350 ==> default: -- Initrd: 00:10:09.350 ==> default: -- Graphics Type: vnc 00:10:09.350 ==> default: -- Graphics Port: -1 00:10:09.350 ==> default: -- Graphics IP: 127.0.0.1 00:10:09.350 ==> default: -- Graphics Password: Not defined 00:10:09.350 ==> default: -- Video Type: cirrus 00:10:09.350 ==> default: -- Video VRAM: 9216 00:10:09.350 ==> default: -- Sound Type: 00:10:09.350 ==> default: -- Keymap: en-us 00:10:09.350 ==> default: -- TPM Path: 00:10:09.350 ==> default: -- INPUT: type=mouse, bus=ps2 00:10:09.350 ==> default: -- Command line args: 00:10:09.350 ==> default: -> value=-device, 00:10:09.350 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:10:09.350 ==> default: -> value=-drive, 00:10:09.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:10:09.350 ==> default: -> value=-device, 00:10:09.350 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:09.350 ==> default: -> value=-device, 00:10:09.350 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:10:09.350 ==> default: -> value=-drive, 00:10:09.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:10:09.350 ==> default: -> value=-device, 00:10:09.350 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:09.350 ==> default: -> value=-drive, 00:10:09.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:10:09.350 ==> default: -> value=-device, 00:10:09.350 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:09.350 ==> default: -> value=-drive, 00:10:09.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:10:09.350 ==> default: -> value=-device, 00:10:09.350 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:09.608 ==> default: Creating shared folders metadata... 00:10:09.608 ==> default: Starting domain. 00:10:10.983 ==> default: Waiting for domain to get an IP address... 00:10:29.108 ==> default: Waiting for SSH to become available... 00:10:29.108 ==> default: Configuring and enabling network interfaces... 
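The "-device nvme" / "-drive" / "-device nvme-ns" triples logged above are how the guest gets its emulated NVMe controllers: each controller (nvme,id=...,serial=...,addr=...) exposes one namespace per raw backing file attached through an nvme-ns device. Below is a minimal stand-alone sketch of the same wiring for reference; it is an illustration, not the CI's exact command line, and it assumes qemu-system-x86_64 with NVMe emulation is installed and that nvme0.img already exists (e.g. created with "truncate -s 4G nvme0.img").

# Sketch only: one NVMe controller with a single 4 KiB-block namespace,
# mirroring the option pattern in the log above (values reused from it).
qemu-system-x86_64 \
  -m 1024 -nographic \
  -drive format=raw,file=nvme0.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096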
00:10:33.323 default: SSH address: 192.168.121.240:22 00:10:33.323 default: SSH username: vagrant 00:10:33.323 default: SSH auth method: private key 00:10:35.853 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:10:43.960 ==> default: Mounting SSHFS shared folder... 00:10:45.337 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:10:45.337 ==> default: Checking Mount.. 00:10:46.776 ==> default: Folder Successfully Mounted! 00:10:46.776 ==> default: Running provisioner: file... 00:10:47.709 default: ~/.gitconfig => .gitconfig 00:10:47.967 00:10:47.967 SUCCESS! 00:10:47.967 00:10:47.967 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:10:47.967 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:10:47.967 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:10:47.967 00:10:47.976 [Pipeline] } 00:10:47.996 [Pipeline] // stage 00:10:48.006 [Pipeline] dir 00:10:48.007 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:10:48.009 [Pipeline] { 00:10:48.024 [Pipeline] catchError 00:10:48.026 [Pipeline] { 00:10:48.039 [Pipeline] sh 00:10:48.320 + vagrant ssh-config --host vagrant 00:10:48.320 + sed -ne /^Host/,$p 00:10:48.320 + tee ssh_conf 00:10:52.556 Host vagrant 00:10:52.556 HostName 192.168.121.240 00:10:52.556 User vagrant 00:10:52.556 Port 22 00:10:52.556 UserKnownHostsFile /dev/null 00:10:52.556 StrictHostKeyChecking no 00:10:52.556 PasswordAuthentication no 00:10:52.556 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:10:52.556 IdentitiesOnly yes 00:10:52.556 LogLevel FATAL 00:10:52.556 ForwardAgent yes 00:10:52.556 ForwardX11 yes 00:10:52.556 00:10:52.569 [Pipeline] withEnv 00:10:52.571 [Pipeline] { 00:10:52.588 [Pipeline] sh 00:10:52.869 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:10:52.869 source /etc/os-release 00:10:52.869 [[ -e /image.version ]] && img=$(< /image.version) 00:10:52.869 # Minimal, systemd-like check. 00:10:52.869 if [[ -e /.dockerenv ]]; then 00:10:52.869 # Clear garbage from the node's name: 00:10:52.869 # agt-er_autotest_547-896 -> autotest_547-896 00:10:52.869 # $HOSTNAME is the actual container id 00:10:52.869 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:10:52.869 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:10:52.869 # We can assume this is a mount from a host where container is running, 00:10:52.869 # so fetch its hostname to easily identify the target swarm worker. 
00:10:52.870 container="$(< /etc/hostname) ($agent)" 00:10:52.870 else 00:10:52.870 # Fallback 00:10:52.870 container=$agent 00:10:52.870 fi 00:10:52.870 fi 00:10:52.870 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:10:52.870 00:10:53.138 [Pipeline] } 00:10:53.158 [Pipeline] // withEnv 00:10:53.166 [Pipeline] setCustomBuildProperty 00:10:53.182 [Pipeline] stage 00:10:53.183 [Pipeline] { (Tests) 00:10:53.197 [Pipeline] sh 00:10:53.472 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:10:53.742 [Pipeline] sh 00:10:54.021 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:10:54.303 [Pipeline] timeout 00:10:54.303 Timeout set to expire in 30 min 00:10:54.306 [Pipeline] { 00:10:54.324 [Pipeline] sh 00:10:54.604 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:10:55.171 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static 00:10:55.184 [Pipeline] sh 00:10:55.463 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:10:55.735 [Pipeline] sh 00:10:56.015 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:10:56.288 [Pipeline] sh 00:10:56.567 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:10:56.826 ++ readlink -f spdk_repo 00:10:56.826 + DIR_ROOT=/home/vagrant/spdk_repo 00:10:56.826 + [[ -n /home/vagrant/spdk_repo ]] 00:10:56.826 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:10:56.826 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:10:56.826 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:10:56.826 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:10:56.826 + [[ -d /home/vagrant/spdk_repo/output ]] 00:10:56.826 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:10:56.826 + cd /home/vagrant/spdk_repo 00:10:56.826 + source /etc/os-release 00:10:56.826 ++ NAME='Fedora Linux' 00:10:56.826 ++ VERSION='38 (Cloud Edition)' 00:10:56.826 ++ ID=fedora 00:10:56.826 ++ VERSION_ID=38 00:10:56.826 ++ VERSION_CODENAME= 00:10:56.826 ++ PLATFORM_ID=platform:f38 00:10:56.826 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:10:56.826 ++ ANSI_COLOR='0;38;2;60;110;180' 00:10:56.826 ++ LOGO=fedora-logo-icon 00:10:56.826 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:10:56.826 ++ HOME_URL=https://fedoraproject.org/ 00:10:56.826 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:10:56.826 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:10:56.826 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:10:56.826 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:10:56.826 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:10:56.826 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:10:56.826 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:10:56.826 ++ SUPPORT_END=2024-05-14 00:10:56.826 ++ VARIANT='Cloud Edition' 00:10:56.826 ++ VARIANT_ID=cloud 00:10:56.826 + uname -a 00:10:56.826 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:10:56.826 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:57.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:57.393 Hugepages 00:10:57.393 node hugesize free / total 00:10:57.393 node0 1048576kB 0 / 0 00:10:57.393 node0 2048kB 0 / 0 00:10:57.393 00:10:57.393 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:57.393 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:57.393 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:57.393 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:57.393 + rm -f /tmp/spdk-ld-path 00:10:57.393 + source autorun-spdk.conf 00:10:57.393 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:57.393 ++ SPDK_TEST_NVMF=1 00:10:57.393 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:57.393 ++ SPDK_TEST_URING=1 00:10:57.393 ++ SPDK_TEST_VFIOUSER=1 00:10:57.393 ++ SPDK_TEST_USDT=1 00:10:57.393 ++ SPDK_RUN_ASAN=1 00:10:57.393 ++ SPDK_RUN_UBSAN=1 00:10:57.393 ++ NET_TYPE=virt 00:10:57.393 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:57.393 ++ RUN_NIGHTLY=1 00:10:57.393 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:10:57.393 + [[ -n '' ]] 00:10:57.393 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:10:57.393 + for M in /var/spdk/build-*-manifest.txt 00:10:57.393 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:10:57.393 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:57.393 + for M in /var/spdk/build-*-manifest.txt 00:10:57.393 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:10:57.393 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:57.393 ++ uname 00:10:57.393 + [[ Linux == \L\i\n\u\x ]] 00:10:57.393 + sudo dmesg -T 00:10:57.393 + sudo dmesg --clear 00:10:57.393 + dmesg_pid=5165 00:10:57.393 + sudo dmesg -Tw 00:10:57.393 + [[ Fedora Linux == FreeBSD ]] 00:10:57.393 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.393 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.393 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:10:57.393 + [[ -x 
/usr/src/fio-static/fio ]] 00:10:57.393 + export FIO_BIN=/usr/src/fio-static/fio 00:10:57.393 + FIO_BIN=/usr/src/fio-static/fio 00:10:57.393 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:10:57.393 + [[ ! -v VFIO_QEMU_BIN ]] 00:10:57.393 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:10:57.393 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.393 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.393 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:10:57.393 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.393 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.393 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:57.393 Test configuration: 00:10:57.393 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:57.393 SPDK_TEST_NVMF=1 00:10:57.393 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:57.393 SPDK_TEST_URING=1 00:10:57.393 SPDK_TEST_VFIOUSER=1 00:10:57.393 SPDK_TEST_USDT=1 00:10:57.393 SPDK_RUN_ASAN=1 00:10:57.393 SPDK_RUN_UBSAN=1 00:10:57.393 NET_TYPE=virt 00:10:57.393 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:57.651 RUN_NIGHTLY=1 16:48:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.651 16:48:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:57.651 16:48:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.651 16:48:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.651 16:48:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.652 16:48:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.652 16:48:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.652 16:48:59 -- paths/export.sh@5 -- $ export PATH 00:10:57.652 16:48:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.652 16:48:59 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:10:57.652 16:48:59 -- common/autobuild_common.sh@447 -- $ date +%s 00:10:57.652 16:48:59 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721666939.XXXXXX 00:10:57.652 
16:48:59 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721666939.ljiOIM 00:10:57.652 16:48:59 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:10:57.652 16:48:59 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:10:57.652 16:48:59 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:10:57.652 16:48:59 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:10:57.652 16:48:59 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:10:57.652 16:48:59 -- common/autobuild_common.sh@463 -- $ get_config_params 00:10:57.652 16:48:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:10:57.652 16:48:59 -- common/autotest_common.sh@10 -- $ set +x 00:10:57.652 16:48:59 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:10:57.652 16:48:59 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:10:57.652 16:48:59 -- pm/common@17 -- $ local monitor 00:10:57.652 16:48:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:57.652 16:48:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:57.652 16:48:59 -- pm/common@25 -- $ sleep 1 00:10:57.652 16:48:59 -- pm/common@21 -- $ date +%s 00:10:57.652 16:48:59 -- pm/common@21 -- $ date +%s 00:10:57.652 16:48:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721666939 00:10:57.652 16:48:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721666939 00:10:57.652 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721666939_collect-cpu-load.pm.log 00:10:57.652 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721666939_collect-vmstat.pm.log 00:10:58.587 16:49:00 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:10:58.587 16:49:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:10:58.587 16:49:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:10:58.587 16:49:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:10:58.587 16:49:00 -- spdk/autobuild.sh@16 -- $ date -u 00:10:58.587 Mon Jul 22 04:49:00 PM UTC 2024 00:10:58.587 16:49:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:10:58.587 v24.09-pre-297-gf7b31b2b9 00:10:58.587 16:49:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:10:58.587 16:49:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:10:58.587 16:49:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:10:58.587 16:49:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:10:58.587 16:49:00 -- common/autotest_common.sh@10 -- $ set +x 00:10:58.587 ************************************ 00:10:58.587 START TEST asan 00:10:58.587 ************************************ 00:10:58.587 using asan 00:10:58.587 16:49:00 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:10:58.587 00:10:58.587 real 
0m0.000s 00:10:58.587 user 0m0.000s 00:10:58.587 sys 0m0.000s 00:10:58.587 16:49:00 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:10:58.587 16:49:00 asan -- common/autotest_common.sh@10 -- $ set +x 00:10:58.587 ************************************ 00:10:58.587 END TEST asan 00:10:58.587 ************************************ 00:10:58.587 16:49:00 -- common/autotest_common.sh@1142 -- $ return 0 00:10:58.587 16:49:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:10:58.587 16:49:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:10:58.587 16:49:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:10:58.587 16:49:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:10:58.587 16:49:00 -- common/autotest_common.sh@10 -- $ set +x 00:10:58.587 ************************************ 00:10:58.587 START TEST ubsan 00:10:58.587 ************************************ 00:10:58.587 using ubsan 00:10:58.587 16:49:00 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:10:58.587 00:10:58.587 real 0m0.000s 00:10:58.587 user 0m0.000s 00:10:58.588 sys 0m0.000s 00:10:58.588 16:49:00 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:10:58.588 ************************************ 00:10:58.588 END TEST ubsan 00:10:58.588 ************************************ 00:10:58.588 16:49:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:10:58.847 16:49:00 -- common/autotest_common.sh@1142 -- $ return 0 00:10:58.847 16:49:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:10:58.847 16:49:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:58.847 16:49:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:58.847 16:49:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:58.847 16:49:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:58.847 16:49:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:10:58.847 16:49:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:10:58.847 16:49:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:10:58.847 16:49:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:10:58.847 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:58.847 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:59.415 Using 'verbs' RDMA provider 00:11:15.668 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:11:27.863 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:11:28.429 Creating mk/config.mk...done. 00:11:28.429 Creating mk/cc.flags.mk...done. 00:11:28.429 Type 'make' to build. 00:11:28.429 16:49:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:11:28.429 16:49:29 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:11:28.429 16:49:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:11:28.429 16:49:29 -- common/autotest_common.sh@10 -- $ set +x 00:11:28.429 ************************************ 00:11:28.429 START TEST make 00:11:28.429 ************************************ 00:11:28.429 16:49:29 make -- common/autotest_common.sh@1123 -- $ make -j10 00:11:28.686 make[1]: Nothing to be done for 'all'. 
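For reference, the configure and build step captured above can be reproduced outside the CI roughly as follows. This is a sketch, not the autobuild script itself: it assumes an SPDK checkout in ./spdk with submodules initialized and build dependencies already installed, and the flags are copied from the "./configure" call in the log.

# Sketch: mirror the logged SPDK configure/build step locally.
cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage \
    --with-ublk --with-vfio-user --with-uring --with-shared
make -j10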
00:11:30.085 The Meson build system 00:11:30.085 Version: 1.3.1 00:11:30.085 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:11:30.085 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:11:30.085 Build type: native build 00:11:30.085 Project name: libvfio-user 00:11:30.085 Project version: 0.0.1 00:11:30.085 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:11:30.085 C linker for the host machine: cc ld.bfd 2.39-16 00:11:30.085 Host machine cpu family: x86_64 00:11:30.085 Host machine cpu: x86_64 00:11:30.085 Run-time dependency threads found: YES 00:11:30.085 Library dl found: YES 00:11:30.085 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:11:30.085 Run-time dependency json-c found: YES 0.17 00:11:30.085 Run-time dependency cmocka found: YES 1.1.7 00:11:30.085 Program pytest-3 found: NO 00:11:30.085 Program flake8 found: NO 00:11:30.085 Program misspell-fixer found: NO 00:11:30.085 Program restructuredtext-lint found: NO 00:11:30.085 Program valgrind found: YES (/usr/bin/valgrind) 00:11:30.085 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:30.085 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:30.085 Compiler for C supports arguments -Wwrite-strings: YES 00:11:30.085 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:11:30.085 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:11:30.085 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:11:30.085 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:11:30.085 Build targets in project: 8 00:11:30.085 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:11:30.085 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:11:30.085 00:11:30.085 libvfio-user 0.0.1 00:11:30.085 00:11:30.085 User defined options 00:11:30.085 buildtype : debug 00:11:30.085 default_library: shared 00:11:30.085 libdir : /usr/local/lib 00:11:30.085 00:11:30.085 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:30.343 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:11:30.601 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:11:30.601 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:11:30.601 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:11:30.601 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:11:30.601 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:11:30.601 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:11:30.601 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:11:30.601 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:11:30.601 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:11:30.601 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:11:30.601 [11/37] Compiling C object samples/null.p/null.c.o 00:11:30.601 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:11:30.601 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:11:30.601 [14/37] Compiling C object samples/client.p/client.c.o 00:11:30.859 [15/37] Linking target samples/client 00:11:30.859 [16/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:11:30.859 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:11:30.859 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:11:30.859 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:11:30.859 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:11:30.859 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:11:30.859 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:11:30.859 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:11:30.859 [24/37] Compiling C object samples/server.p/server.c.o 00:11:30.859 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:11:30.859 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:11:30.859 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:11:30.859 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:11:30.859 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:11:30.859 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:11:31.117 [31/37] Linking target test/unit_tests 00:11:31.117 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:11:31.117 [33/37] Linking target samples/server 00:11:31.117 [34/37] Linking target samples/lspci 00:11:31.117 [35/37] Linking target samples/shadow_ioeventfd_server 00:11:31.117 [36/37] Linking target samples/null 00:11:31.117 [37/37] Linking target samples/gpio-pci-idio-16 00:11:31.117 INFO: autodetecting backend as ninja 00:11:31.117 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:11:31.117 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:11:31.683 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:11:31.683 ninja: no work to do. 00:11:38.257 The Meson build system 00:11:38.257 Version: 1.3.1 00:11:38.257 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:11:38.257 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:11:38.257 Build type: native build 00:11:38.257 Program cat found: YES (/usr/bin/cat) 00:11:38.257 Project name: DPDK 00:11:38.257 Project version: 24.03.0 00:11:38.257 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:11:38.257 C linker for the host machine: cc ld.bfd 2.39-16 00:11:38.257 Host machine cpu family: x86_64 00:11:38.257 Host machine cpu: x86_64 00:11:38.257 Message: ## Building in Developer Mode ## 00:11:38.257 Program pkg-config found: YES (/usr/bin/pkg-config) 00:11:38.257 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:11:38.257 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:11:38.257 Program python3 found: YES (/usr/bin/python3) 00:11:38.257 Program cat found: YES (/usr/bin/cat) 00:11:38.257 Compiler for C supports arguments -march=native: YES 00:11:38.257 Checking for size of "void *" : 8 00:11:38.257 Checking for size of "void *" : 8 (cached) 00:11:38.257 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:11:38.257 Library m found: YES 00:11:38.257 Library numa found: YES 00:11:38.257 Has header "numaif.h" : YES 00:11:38.257 Library fdt found: NO 00:11:38.257 Library execinfo found: NO 00:11:38.257 Has header "execinfo.h" : YES 00:11:38.257 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:11:38.257 Run-time dependency libarchive found: NO (tried pkgconfig) 00:11:38.257 Run-time dependency libbsd found: NO (tried pkgconfig) 00:11:38.257 Run-time dependency jansson found: NO (tried pkgconfig) 00:11:38.257 Run-time dependency openssl found: YES 3.0.9 00:11:38.257 Run-time dependency libpcap found: YES 1.10.4 00:11:38.257 Has header "pcap.h" with dependency libpcap: YES 00:11:38.257 Compiler for C supports arguments -Wcast-qual: YES 00:11:38.257 Compiler for C supports arguments -Wdeprecated: YES 00:11:38.257 Compiler for C supports arguments -Wformat: YES 00:11:38.257 Compiler for C supports arguments -Wformat-nonliteral: NO 00:11:38.257 Compiler for C supports arguments -Wformat-security: NO 00:11:38.257 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:38.257 Compiler for C supports arguments -Wmissing-prototypes: YES 00:11:38.257 Compiler for C supports arguments -Wnested-externs: YES 00:11:38.257 Compiler for C supports arguments -Wold-style-definition: YES 00:11:38.257 Compiler for C supports arguments -Wpointer-arith: YES 00:11:38.257 Compiler for C supports arguments -Wsign-compare: YES 00:11:38.257 Compiler for C supports arguments -Wstrict-prototypes: YES 00:11:38.257 Compiler for C supports arguments -Wundef: YES 00:11:38.257 Compiler for C supports arguments -Wwrite-strings: YES 00:11:38.257 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:11:38.257 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:11:38.257 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:38.257 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:11:38.257 Program objdump found: YES (/usr/bin/objdump) 00:11:38.257 Compiler for C supports arguments -mavx512f: YES 00:11:38.257 Checking if "AVX512 checking" compiles: YES 00:11:38.257 Fetching value of define "__SSE4_2__" : 1 00:11:38.257 Fetching value of define "__AES__" : 1 00:11:38.257 Fetching value of define "__AVX__" : 1 00:11:38.257 Fetching value of define "__AVX2__" : 1 00:11:38.257 Fetching value of define "__AVX512BW__" : 1 00:11:38.257 Fetching value of define "__AVX512CD__" : 1 00:11:38.257 Fetching value of define "__AVX512DQ__" : 1 00:11:38.257 Fetching value of define "__AVX512F__" : 1 00:11:38.257 Fetching value of define "__AVX512VL__" : 1 00:11:38.257 Fetching value of define "__PCLMUL__" : 1 00:11:38.257 Fetching value of define "__RDRND__" : 1 00:11:38.257 Fetching value of define "__RDSEED__" : 1 00:11:38.257 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:11:38.257 Fetching value of define "__znver1__" : (undefined) 00:11:38.257 Fetching value of define "__znver2__" : (undefined) 00:11:38.257 Fetching value of define "__znver3__" : (undefined) 00:11:38.257 Fetching value of define "__znver4__" : (undefined) 00:11:38.257 Library asan found: YES 00:11:38.257 Compiler for C supports arguments -Wno-format-truncation: YES 00:11:38.257 Message: lib/log: Defining dependency "log" 00:11:38.257 Message: lib/kvargs: Defining dependency "kvargs" 00:11:38.257 Message: lib/telemetry: Defining dependency "telemetry" 00:11:38.257 Library rt found: YES 00:11:38.257 Checking for function "getentropy" : NO 00:11:38.257 Message: lib/eal: Defining dependency "eal" 00:11:38.257 Message: lib/ring: Defining dependency "ring" 00:11:38.257 Message: lib/rcu: Defining dependency "rcu" 00:11:38.257 Message: lib/mempool: Defining dependency "mempool" 00:11:38.257 Message: lib/mbuf: Defining dependency "mbuf" 00:11:38.257 Fetching value of define "__PCLMUL__" : 1 (cached) 00:11:38.257 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:38.257 Fetching value of define "__AVX512BW__" : 1 (cached) 00:11:38.257 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:11:38.257 Fetching value of define "__AVX512VL__" : 1 (cached) 00:11:38.257 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:11:38.257 Compiler for C supports arguments -mpclmul: YES 00:11:38.257 Compiler for C supports arguments -maes: YES 00:11:38.257 Compiler for C supports arguments -mavx512f: YES (cached) 00:11:38.257 Compiler for C supports arguments -mavx512bw: YES 00:11:38.257 Compiler for C supports arguments -mavx512dq: YES 00:11:38.257 Compiler for C supports arguments -mavx512vl: YES 00:11:38.257 Compiler for C supports arguments -mvpclmulqdq: YES 00:11:38.257 Compiler for C supports arguments -mavx2: YES 00:11:38.258 Compiler for C supports arguments -mavx: YES 00:11:38.258 Message: lib/net: Defining dependency "net" 00:11:38.258 Message: lib/meter: Defining dependency "meter" 00:11:38.258 Message: lib/ethdev: Defining dependency "ethdev" 00:11:38.258 Message: lib/pci: Defining dependency "pci" 00:11:38.258 Message: lib/cmdline: Defining dependency "cmdline" 00:11:38.258 Message: lib/hash: Defining dependency "hash" 00:11:38.258 Message: lib/timer: Defining dependency "timer" 00:11:38.258 Message: lib/compressdev: Defining dependency "compressdev" 00:11:38.258 Message: lib/cryptodev: Defining dependency "cryptodev" 00:11:38.258 Message: lib/dmadev: Defining dependency "dmadev" 00:11:38.258 Compiler for C supports arguments -Wno-cast-qual: YES 00:11:38.258 Message: lib/power: 
Defining dependency "power" 00:11:38.258 Message: lib/reorder: Defining dependency "reorder" 00:11:38.258 Message: lib/security: Defining dependency "security" 00:11:38.258 Has header "linux/userfaultfd.h" : YES 00:11:38.258 Has header "linux/vduse.h" : YES 00:11:38.258 Message: lib/vhost: Defining dependency "vhost" 00:11:38.258 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:11:38.258 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:11:38.258 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:11:38.258 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:11:38.258 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:11:38.258 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:11:38.258 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:11:38.258 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:11:38.258 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:11:38.258 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:11:38.258 Program doxygen found: YES (/usr/bin/doxygen) 00:11:38.258 Configuring doxy-api-html.conf using configuration 00:11:38.258 Configuring doxy-api-man.conf using configuration 00:11:38.258 Program mandb found: YES (/usr/bin/mandb) 00:11:38.258 Program sphinx-build found: NO 00:11:38.258 Configuring rte_build_config.h using configuration 00:11:38.258 Message: 00:11:38.258 ================= 00:11:38.258 Applications Enabled 00:11:38.258 ================= 00:11:38.258 00:11:38.258 apps: 00:11:38.258 00:11:38.258 00:11:38.258 Message: 00:11:38.258 ================= 00:11:38.258 Libraries Enabled 00:11:38.258 ================= 00:11:38.258 00:11:38.258 libs: 00:11:38.258 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:11:38.258 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:11:38.258 cryptodev, dmadev, power, reorder, security, vhost, 00:11:38.258 00:11:38.258 Message: 00:11:38.258 =============== 00:11:38.258 Drivers Enabled 00:11:38.258 =============== 00:11:38.258 00:11:38.258 common: 00:11:38.258 00:11:38.258 bus: 00:11:38.258 pci, vdev, 00:11:38.258 mempool: 00:11:38.258 ring, 00:11:38.258 dma: 00:11:38.258 00:11:38.258 net: 00:11:38.258 00:11:38.258 crypto: 00:11:38.258 00:11:38.258 compress: 00:11:38.258 00:11:38.258 vdpa: 00:11:38.258 00:11:38.258 00:11:38.258 Message: 00:11:38.258 ================= 00:11:38.258 Content Skipped 00:11:38.258 ================= 00:11:38.258 00:11:38.258 apps: 00:11:38.258 dumpcap: explicitly disabled via build config 00:11:38.258 graph: explicitly disabled via build config 00:11:38.258 pdump: explicitly disabled via build config 00:11:38.258 proc-info: explicitly disabled via build config 00:11:38.258 test-acl: explicitly disabled via build config 00:11:38.258 test-bbdev: explicitly disabled via build config 00:11:38.258 test-cmdline: explicitly disabled via build config 00:11:38.258 test-compress-perf: explicitly disabled via build config 00:11:38.258 test-crypto-perf: explicitly disabled via build config 00:11:38.258 test-dma-perf: explicitly disabled via build config 00:11:38.258 test-eventdev: explicitly disabled via build config 00:11:38.258 test-fib: explicitly disabled via build config 00:11:38.258 test-flow-perf: explicitly disabled via build config 00:11:38.258 test-gpudev: explicitly disabled via build config 00:11:38.258 test-mldev: explicitly disabled via build config 00:11:38.258 test-pipeline: 
explicitly disabled via build config 00:11:38.258 test-pmd: explicitly disabled via build config 00:11:38.258 test-regex: explicitly disabled via build config 00:11:38.258 test-sad: explicitly disabled via build config 00:11:38.258 test-security-perf: explicitly disabled via build config 00:11:38.258 00:11:38.258 libs: 00:11:38.258 argparse: explicitly disabled via build config 00:11:38.258 metrics: explicitly disabled via build config 00:11:38.258 acl: explicitly disabled via build config 00:11:38.258 bbdev: explicitly disabled via build config 00:11:38.258 bitratestats: explicitly disabled via build config 00:11:38.258 bpf: explicitly disabled via build config 00:11:38.258 cfgfile: explicitly disabled via build config 00:11:38.258 distributor: explicitly disabled via build config 00:11:38.258 efd: explicitly disabled via build config 00:11:38.258 eventdev: explicitly disabled via build config 00:11:38.258 dispatcher: explicitly disabled via build config 00:11:38.258 gpudev: explicitly disabled via build config 00:11:38.258 gro: explicitly disabled via build config 00:11:38.258 gso: explicitly disabled via build config 00:11:38.258 ip_frag: explicitly disabled via build config 00:11:38.258 jobstats: explicitly disabled via build config 00:11:38.258 latencystats: explicitly disabled via build config 00:11:38.258 lpm: explicitly disabled via build config 00:11:38.258 member: explicitly disabled via build config 00:11:38.258 pcapng: explicitly disabled via build config 00:11:38.258 rawdev: explicitly disabled via build config 00:11:38.258 regexdev: explicitly disabled via build config 00:11:38.258 mldev: explicitly disabled via build config 00:11:38.258 rib: explicitly disabled via build config 00:11:38.258 sched: explicitly disabled via build config 00:11:38.258 stack: explicitly disabled via build config 00:11:38.258 ipsec: explicitly disabled via build config 00:11:38.258 pdcp: explicitly disabled via build config 00:11:38.258 fib: explicitly disabled via build config 00:11:38.258 port: explicitly disabled via build config 00:11:38.258 pdump: explicitly disabled via build config 00:11:38.258 table: explicitly disabled via build config 00:11:38.258 pipeline: explicitly disabled via build config 00:11:38.258 graph: explicitly disabled via build config 00:11:38.258 node: explicitly disabled via build config 00:11:38.258 00:11:38.258 drivers: 00:11:38.258 common/cpt: not in enabled drivers build config 00:11:38.258 common/dpaax: not in enabled drivers build config 00:11:38.258 common/iavf: not in enabled drivers build config 00:11:38.258 common/idpf: not in enabled drivers build config 00:11:38.258 common/ionic: not in enabled drivers build config 00:11:38.258 common/mvep: not in enabled drivers build config 00:11:38.258 common/octeontx: not in enabled drivers build config 00:11:38.258 bus/auxiliary: not in enabled drivers build config 00:11:38.258 bus/cdx: not in enabled drivers build config 00:11:38.258 bus/dpaa: not in enabled drivers build config 00:11:38.258 bus/fslmc: not in enabled drivers build config 00:11:38.258 bus/ifpga: not in enabled drivers build config 00:11:38.258 bus/platform: not in enabled drivers build config 00:11:38.258 bus/uacce: not in enabled drivers build config 00:11:38.258 bus/vmbus: not in enabled drivers build config 00:11:38.258 common/cnxk: not in enabled drivers build config 00:11:38.258 common/mlx5: not in enabled drivers build config 00:11:38.258 common/nfp: not in enabled drivers build config 00:11:38.258 common/nitrox: not in enabled drivers build config 
00:11:38.258 common/qat: not in enabled drivers build config 00:11:38.258 common/sfc_efx: not in enabled drivers build config 00:11:38.258 mempool/bucket: not in enabled drivers build config 00:11:38.258 mempool/cnxk: not in enabled drivers build config 00:11:38.258 mempool/dpaa: not in enabled drivers build config 00:11:38.258 mempool/dpaa2: not in enabled drivers build config 00:11:38.258 mempool/octeontx: not in enabled drivers build config 00:11:38.258 mempool/stack: not in enabled drivers build config 00:11:38.258 dma/cnxk: not in enabled drivers build config 00:11:38.258 dma/dpaa: not in enabled drivers build config 00:11:38.258 dma/dpaa2: not in enabled drivers build config 00:11:38.258 dma/hisilicon: not in enabled drivers build config 00:11:38.258 dma/idxd: not in enabled drivers build config 00:11:38.258 dma/ioat: not in enabled drivers build config 00:11:38.258 dma/skeleton: not in enabled drivers build config 00:11:38.258 net/af_packet: not in enabled drivers build config 00:11:38.258 net/af_xdp: not in enabled drivers build config 00:11:38.258 net/ark: not in enabled drivers build config 00:11:38.258 net/atlantic: not in enabled drivers build config 00:11:38.258 net/avp: not in enabled drivers build config 00:11:38.258 net/axgbe: not in enabled drivers build config 00:11:38.258 net/bnx2x: not in enabled drivers build config 00:11:38.258 net/bnxt: not in enabled drivers build config 00:11:38.258 net/bonding: not in enabled drivers build config 00:11:38.258 net/cnxk: not in enabled drivers build config 00:11:38.258 net/cpfl: not in enabled drivers build config 00:11:38.258 net/cxgbe: not in enabled drivers build config 00:11:38.258 net/dpaa: not in enabled drivers build config 00:11:38.258 net/dpaa2: not in enabled drivers build config 00:11:38.258 net/e1000: not in enabled drivers build config 00:11:38.258 net/ena: not in enabled drivers build config 00:11:38.258 net/enetc: not in enabled drivers build config 00:11:38.258 net/enetfec: not in enabled drivers build config 00:11:38.258 net/enic: not in enabled drivers build config 00:11:38.258 net/failsafe: not in enabled drivers build config 00:11:38.258 net/fm10k: not in enabled drivers build config 00:11:38.258 net/gve: not in enabled drivers build config 00:11:38.258 net/hinic: not in enabled drivers build config 00:11:38.258 net/hns3: not in enabled drivers build config 00:11:38.259 net/i40e: not in enabled drivers build config 00:11:38.259 net/iavf: not in enabled drivers build config 00:11:38.259 net/ice: not in enabled drivers build config 00:11:38.259 net/idpf: not in enabled drivers build config 00:11:38.259 net/igc: not in enabled drivers build config 00:11:38.259 net/ionic: not in enabled drivers build config 00:11:38.259 net/ipn3ke: not in enabled drivers build config 00:11:38.259 net/ixgbe: not in enabled drivers build config 00:11:38.259 net/mana: not in enabled drivers build config 00:11:38.259 net/memif: not in enabled drivers build config 00:11:38.259 net/mlx4: not in enabled drivers build config 00:11:38.259 net/mlx5: not in enabled drivers build config 00:11:38.259 net/mvneta: not in enabled drivers build config 00:11:38.259 net/mvpp2: not in enabled drivers build config 00:11:38.259 net/netvsc: not in enabled drivers build config 00:11:38.259 net/nfb: not in enabled drivers build config 00:11:38.259 net/nfp: not in enabled drivers build config 00:11:38.259 net/ngbe: not in enabled drivers build config 00:11:38.259 net/null: not in enabled drivers build config 00:11:38.259 net/octeontx: not in enabled drivers 
build config 00:11:38.259 net/octeon_ep: not in enabled drivers build config 00:11:38.259 net/pcap: not in enabled drivers build config 00:11:38.259 net/pfe: not in enabled drivers build config 00:11:38.259 net/qede: not in enabled drivers build config 00:11:38.259 net/ring: not in enabled drivers build config 00:11:38.259 net/sfc: not in enabled drivers build config 00:11:38.259 net/softnic: not in enabled drivers build config 00:11:38.259 net/tap: not in enabled drivers build config 00:11:38.259 net/thunderx: not in enabled drivers build config 00:11:38.259 net/txgbe: not in enabled drivers build config 00:11:38.259 net/vdev_netvsc: not in enabled drivers build config 00:11:38.259 net/vhost: not in enabled drivers build config 00:11:38.259 net/virtio: not in enabled drivers build config 00:11:38.259 net/vmxnet3: not in enabled drivers build config 00:11:38.259 raw/*: missing internal dependency, "rawdev" 00:11:38.259 crypto/armv8: not in enabled drivers build config 00:11:38.259 crypto/bcmfs: not in enabled drivers build config 00:11:38.259 crypto/caam_jr: not in enabled drivers build config 00:11:38.259 crypto/ccp: not in enabled drivers build config 00:11:38.259 crypto/cnxk: not in enabled drivers build config 00:11:38.259 crypto/dpaa_sec: not in enabled drivers build config 00:11:38.259 crypto/dpaa2_sec: not in enabled drivers build config 00:11:38.259 crypto/ipsec_mb: not in enabled drivers build config 00:11:38.259 crypto/mlx5: not in enabled drivers build config 00:11:38.259 crypto/mvsam: not in enabled drivers build config 00:11:38.259 crypto/nitrox: not in enabled drivers build config 00:11:38.259 crypto/null: not in enabled drivers build config 00:11:38.259 crypto/octeontx: not in enabled drivers build config 00:11:38.259 crypto/openssl: not in enabled drivers build config 00:11:38.259 crypto/scheduler: not in enabled drivers build config 00:11:38.259 crypto/uadk: not in enabled drivers build config 00:11:38.259 crypto/virtio: not in enabled drivers build config 00:11:38.259 compress/isal: not in enabled drivers build config 00:11:38.259 compress/mlx5: not in enabled drivers build config 00:11:38.259 compress/nitrox: not in enabled drivers build config 00:11:38.259 compress/octeontx: not in enabled drivers build config 00:11:38.259 compress/zlib: not in enabled drivers build config 00:11:38.259 regex/*: missing internal dependency, "regexdev" 00:11:38.259 ml/*: missing internal dependency, "mldev" 00:11:38.259 vdpa/ifc: not in enabled drivers build config 00:11:38.259 vdpa/mlx5: not in enabled drivers build config 00:11:38.259 vdpa/nfp: not in enabled drivers build config 00:11:38.259 vdpa/sfc: not in enabled drivers build config 00:11:38.259 event/*: missing internal dependency, "eventdev" 00:11:38.259 baseband/*: missing internal dependency, "bbdev" 00:11:38.259 gpu/*: missing internal dependency, "gpudev" 00:11:38.259 00:11:38.259 00:11:38.844 Build targets in project: 85 00:11:38.844 00:11:38.844 DPDK 24.03.0 00:11:38.844 00:11:38.844 User defined options 00:11:38.844 buildtype : debug 00:11:38.844 default_library : shared 00:11:38.844 libdir : lib 00:11:38.844 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:38.844 b_sanitize : address 00:11:38.844 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:11:38.844 c_link_args : 00:11:38.844 cpu_instruction_set: native 00:11:38.844 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:11:38.844 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:11:38.844 enable_docs : false 00:11:38.844 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:11:38.844 enable_kmods : false 00:11:38.844 max_lcores : 128 00:11:38.844 tests : false 00:11:38.844 00:11:38.844 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:39.102 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:11:39.360 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:11:39.360 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:11:39.360 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:11:39.360 [4/268] Linking static target lib/librte_kvargs.a 00:11:39.360 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:11:39.360 [6/268] Linking static target lib/librte_log.a 00:11:39.618 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:11:39.618 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:11:39.619 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:11:39.619 [10/268] Linking static target lib/librte_telemetry.a 00:11:39.876 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:11:39.876 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:11:39.876 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:11:39.876 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:11:39.876 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:11:39.876 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:11:39.876 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:11:40.133 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:11:40.391 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:11:40.391 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.391 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:11:40.391 [22/268] Linking target lib/librte_log.so.24.1 00:11:40.649 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:11:40.649 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:11:40.649 [25/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.649 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:11:40.649 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:11:40.649 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:11:40.649 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:11:40.907 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
00:11:40.907 [31/268] Linking target lib/librte_kvargs.so.24.1 00:11:40.907 [32/268] Linking target lib/librte_telemetry.so.24.1 00:11:41.166 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:11:41.166 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:11:41.166 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:11:41.166 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:11:41.166 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:11:41.166 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:11:41.166 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:11:41.424 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:11:41.424 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:11:41.424 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:11:41.424 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:11:41.424 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:11:41.424 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:11:41.682 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:11:41.682 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:11:41.939 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:11:41.939 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:11:42.196 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:11:42.196 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:11:42.196 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:11:42.196 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:11:42.196 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:11:42.196 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:11:42.454 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:11:42.454 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:11:42.454 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:11:42.454 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:11:42.711 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:11:42.711 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:11:42.711 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:11:42.711 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:11:42.711 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:11:42.711 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:11:42.969 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:11:42.969 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:11:43.227 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:11:43.227 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:11:43.227 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:11:43.484 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:11:43.484 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:11:43.484 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:11:43.484 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:11:43.742 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:11:43.742 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:11:43.742 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:11:43.742 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:11:43.742 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:11:43.999 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:11:43.999 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:11:43.999 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:11:44.257 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:11:44.257 [84/268] Linking static target lib/librte_ring.a 00:11:44.257 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:11:44.257 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:11:44.515 [87/268] Linking static target lib/librte_eal.a 00:11:44.515 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:11:44.773 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:11:44.773 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:11:44.773 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:11:44.773 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:11:44.773 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:11:44.773 [94/268] Linking static target lib/librte_mempool.a 00:11:44.773 [95/268] Linking static target lib/librte_rcu.a 00:11:44.773 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:11:45.031 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:11:45.031 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:11:45.289 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:11:45.289 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:11:45.547 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:11:45.547 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:11:45.547 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:11:45.813 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:11:45.813 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:11:45.813 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:11:45.813 [107/268] Linking static target lib/librte_net.a 00:11:46.079 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:11:46.079 [109/268] Linking static target lib/librte_mbuf.a 00:11:46.079 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:11:46.079 [111/268] Linking static target lib/librte_meter.a 00:11:46.079 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.335 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:11:46.335 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.335 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:11:46.335 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:11:46.592 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:11:46.592 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:11:47.156 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:11:47.156 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:11:47.156 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:11:47.156 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:11:47.414 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:11:47.695 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:11:47.695 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:11:47.695 [126/268] Linking static target lib/librte_pci.a 00:11:47.695 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:11:47.953 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:11:47.953 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:11:47.953 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:11:47.953 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:11:47.953 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:11:48.211 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:11:48.211 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:48.211 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:11:48.211 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:11:48.211 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:11:48.211 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:11:48.211 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:11:48.468 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:11:48.468 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:11:48.468 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:11:48.468 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:11:48.468 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:11:48.468 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:11:48.726 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:11:48.726 [147/268] Linking static target lib/librte_cmdline.a 00:11:48.984 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:11:48.984 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:11:49.242 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:11:49.242 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:11:49.242 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:11:49.242 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:11:49.242 [154/268] Linking static target lib/librte_timer.a 00:11:49.242 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:11:49.544 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:11:49.802 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:11:49.802 [158/268] Linking static target lib/librte_compressdev.a 00:11:49.802 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:11:49.802 [160/268] Linking static target lib/librte_hash.a 00:11:50.059 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.059 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:11:50.059 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:11:50.059 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:11:50.059 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:11:50.059 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:11:50.059 [167/268] Linking static target lib/librte_ethdev.a 00:11:50.059 [168/268] Linking static target lib/librte_dmadev.a 00:11:50.059 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:11:50.625 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:11:50.625 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.625 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:11:50.625 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:11:50.625 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:11:50.883 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.883 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:11:50.883 [177/268] Linking static target lib/librte_cryptodev.a 00:11:51.184 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:11:51.184 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:11:51.184 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:11:51.184 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:51.184 [182/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:11:51.184 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:11:51.184 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:11:51.442 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:11:51.442 [186/268] Linking static target lib/librte_power.a 00:11:51.700 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:11:51.959 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:11:51.960 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:11:51.960 [190/268] Linking static target lib/librte_reorder.a 00:11:51.960 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:11:51.960 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 
00:11:52.217 [193/268] Linking static target lib/librte_security.a 00:11:52.217 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:11:52.474 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:11:52.732 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:11:52.732 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:11:52.990 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:11:52.990 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:11:53.250 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:11:53.250 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:11:53.513 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:11:53.513 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:11:53.513 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:11:53.778 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:11:53.778 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:53.778 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:11:53.778 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:11:53.778 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:11:54.045 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:11:54.046 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:11:54.046 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:11:54.046 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:11:54.317 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:54.317 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:54.317 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:54.317 [217/268] Linking static target drivers/librte_bus_vdev.a 00:11:54.317 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:54.317 [219/268] Linking static target drivers/librte_bus_pci.a 00:11:54.317 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:11:54.317 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:11:54.590 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:54.590 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:11:54.590 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:54.590 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:54.590 [226/268] Linking static target drivers/librte_mempool_ring.a 00:11:54.861 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:56.255 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:57.187 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to 
capture output) 00:11:57.444 [230/268] Linking target lib/librte_eal.so.24.1 00:11:57.444 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:11:57.444 [232/268] Linking target lib/librte_ring.so.24.1 00:11:57.702 [233/268] Linking target lib/librte_meter.so.24.1 00:11:57.702 [234/268] Linking target lib/librte_pci.so.24.1 00:11:57.702 [235/268] Linking target lib/librte_timer.so.24.1 00:11:57.702 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:11:57.702 [237/268] Linking target lib/librte_dmadev.so.24.1 00:11:57.702 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:11:57.702 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:11:57.702 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:11:57.702 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:11:57.702 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:11:57.702 [243/268] Linking target lib/librte_rcu.so.24.1 00:11:57.702 [244/268] Linking target lib/librte_mempool.so.24.1 00:11:57.702 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:11:57.960 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:11:57.960 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:11:57.960 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:11:57.960 [249/268] Linking target lib/librte_mbuf.so.24.1 00:11:58.219 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:11:58.219 [251/268] Linking target lib/librte_compressdev.so.24.1 00:11:58.219 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:11:58.219 [253/268] Linking target lib/librte_reorder.so.24.1 00:11:58.219 [254/268] Linking target lib/librte_net.so.24.1 00:11:58.476 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:11:58.476 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:11:58.476 [257/268] Linking target lib/librte_cmdline.so.24.1 00:11:58.476 [258/268] Linking target lib/librte_hash.so.24.1 00:11:58.476 [259/268] Linking target lib/librte_security.so.24.1 00:11:58.734 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:11:58.734 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:58.992 [262/268] Linking target lib/librte_ethdev.so.24.1 00:11:58.992 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:11:58.992 [264/268] Linking target lib/librte_power.so.24.1 00:11:59.960 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:59.960 [266/268] Linking static target lib/librte_vhost.a 00:12:01.859 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:02.117 [268/268] Linking target lib/librte_vhost.so.24.1 00:12:02.117 INFO: autodetecting backend as ninja 00:12:02.117 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:12:03.499 CC lib/ut/ut.o 00:12:03.499 CC lib/ut_mock/mock.o 00:12:03.499 CC lib/log/log.o 00:12:03.499 CC lib/log/log_flags.o 00:12:03.499 CC lib/log/log_deprecated.o 00:12:03.499 LIB libspdk_ut_mock.a 00:12:03.499 LIB libspdk_log.a 00:12:03.499 
LIB libspdk_ut.a 00:12:03.499 SO libspdk_ut_mock.so.6.0 00:12:03.499 SO libspdk_log.so.7.0 00:12:03.499 SO libspdk_ut.so.2.0 00:12:03.785 SYMLINK libspdk_ut_mock.so 00:12:03.785 SYMLINK libspdk_log.so 00:12:03.785 SYMLINK libspdk_ut.so 00:12:03.785 CXX lib/trace_parser/trace.o 00:12:03.785 CC lib/util/bit_array.o 00:12:03.785 CC lib/util/base64.o 00:12:04.051 CC lib/util/cpuset.o 00:12:04.051 CC lib/util/crc16.o 00:12:04.051 CC lib/util/crc32.o 00:12:04.051 CC lib/util/crc32c.o 00:12:04.051 CC lib/dma/dma.o 00:12:04.051 CC lib/ioat/ioat.o 00:12:04.051 CC lib/vfio_user/host/vfio_user_pci.o 00:12:04.051 CC lib/util/crc32_ieee.o 00:12:04.051 CC lib/util/crc64.o 00:12:04.051 CC lib/util/dif.o 00:12:04.051 CC lib/vfio_user/host/vfio_user.o 00:12:04.051 CC lib/util/fd.o 00:12:04.051 LIB libspdk_dma.a 00:12:04.317 SO libspdk_dma.so.4.0 00:12:04.317 SYMLINK libspdk_dma.so 00:12:04.317 CC lib/util/fd_group.o 00:12:04.317 CC lib/util/file.o 00:12:04.317 CC lib/util/hexlify.o 00:12:04.317 CC lib/util/iov.o 00:12:04.317 CC lib/util/math.o 00:12:04.317 CC lib/util/net.o 00:12:04.317 LIB libspdk_vfio_user.a 00:12:04.317 CC lib/util/pipe.o 00:12:04.317 CC lib/util/strerror_tls.o 00:12:04.317 SO libspdk_vfio_user.so.5.0 00:12:04.585 CC lib/util/string.o 00:12:04.585 LIB libspdk_ioat.a 00:12:04.585 SYMLINK libspdk_vfio_user.so 00:12:04.585 CC lib/util/uuid.o 00:12:04.585 CC lib/util/xor.o 00:12:04.585 SO libspdk_ioat.so.7.0 00:12:04.585 CC lib/util/zipf.o 00:12:04.585 SYMLINK libspdk_ioat.so 00:12:04.858 LIB libspdk_util.a 00:12:05.131 SO libspdk_util.so.10.0 00:12:05.131 LIB libspdk_trace_parser.a 00:12:05.131 SO libspdk_trace_parser.so.5.0 00:12:05.131 SYMLINK libspdk_util.so 00:12:05.393 SYMLINK libspdk_trace_parser.so 00:12:05.393 CC lib/rdma_utils/rdma_utils.o 00:12:05.393 CC lib/json/json_parse.o 00:12:05.393 CC lib/json/json_write.o 00:12:05.393 CC lib/conf/conf.o 00:12:05.393 CC lib/json/json_util.o 00:12:05.393 CC lib/rdma_provider/common.o 00:12:05.393 CC lib/rdma_provider/rdma_provider_verbs.o 00:12:05.393 CC lib/vmd/vmd.o 00:12:05.393 CC lib/env_dpdk/env.o 00:12:05.393 CC lib/idxd/idxd.o 00:12:05.650 LIB libspdk_conf.a 00:12:05.650 CC lib/env_dpdk/memory.o 00:12:05.650 SO libspdk_conf.so.6.0 00:12:05.650 LIB libspdk_rdma_utils.a 00:12:05.650 CC lib/env_dpdk/pci.o 00:12:05.650 CC lib/env_dpdk/init.o 00:12:05.650 LIB libspdk_rdma_provider.a 00:12:05.650 SO libspdk_rdma_utils.so.1.0 00:12:05.650 SYMLINK libspdk_conf.so 00:12:05.908 SO libspdk_rdma_provider.so.6.0 00:12:05.908 LIB libspdk_json.a 00:12:05.908 CC lib/env_dpdk/threads.o 00:12:05.908 SYMLINK libspdk_rdma_utils.so 00:12:05.908 CC lib/env_dpdk/pci_ioat.o 00:12:05.908 SO libspdk_json.so.6.0 00:12:05.908 SYMLINK libspdk_rdma_provider.so 00:12:05.908 CC lib/env_dpdk/pci_virtio.o 00:12:05.908 SYMLINK libspdk_json.so 00:12:05.908 CC lib/env_dpdk/pci_vmd.o 00:12:05.908 CC lib/env_dpdk/pci_idxd.o 00:12:05.908 CC lib/env_dpdk/pci_event.o 00:12:06.165 CC lib/idxd/idxd_user.o 00:12:06.165 CC lib/idxd/idxd_kernel.o 00:12:06.165 CC lib/env_dpdk/sigbus_handler.o 00:12:06.165 CC lib/env_dpdk/pci_dpdk.o 00:12:06.165 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:06.165 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:06.422 CC lib/vmd/led.o 00:12:06.422 LIB libspdk_idxd.a 00:12:06.422 SO libspdk_idxd.so.12.0 00:12:06.422 CC lib/jsonrpc/jsonrpc_server.o 00:12:06.422 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:06.422 CC lib/jsonrpc/jsonrpc_client.o 00:12:06.422 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:06.422 SYMLINK libspdk_idxd.so 00:12:06.422 LIB libspdk_vmd.a 00:12:06.422 
SO libspdk_vmd.so.6.0 00:12:06.680 SYMLINK libspdk_vmd.so 00:12:06.680 LIB libspdk_jsonrpc.a 00:12:06.680 SO libspdk_jsonrpc.so.6.0 00:12:06.938 SYMLINK libspdk_jsonrpc.so 00:12:07.197 CC lib/rpc/rpc.o 00:12:07.456 LIB libspdk_rpc.a 00:12:07.456 LIB libspdk_env_dpdk.a 00:12:07.456 SO libspdk_rpc.so.6.0 00:12:07.713 SYMLINK libspdk_rpc.so 00:12:07.713 SO libspdk_env_dpdk.so.15.0 00:12:07.971 CC lib/notify/notify.o 00:12:07.971 CC lib/notify/notify_rpc.o 00:12:07.971 CC lib/trace/trace.o 00:12:07.971 CC lib/trace/trace_rpc.o 00:12:07.971 CC lib/trace/trace_flags.o 00:12:07.971 CC lib/keyring/keyring.o 00:12:07.971 CC lib/keyring/keyring_rpc.o 00:12:07.971 SYMLINK libspdk_env_dpdk.so 00:12:08.228 LIB libspdk_notify.a 00:12:08.228 SO libspdk_notify.so.6.0 00:12:08.228 LIB libspdk_trace.a 00:12:08.228 SYMLINK libspdk_notify.so 00:12:08.228 SO libspdk_trace.so.10.0 00:12:08.228 LIB libspdk_keyring.a 00:12:08.228 SO libspdk_keyring.so.1.0 00:12:08.485 SYMLINK libspdk_trace.so 00:12:08.485 SYMLINK libspdk_keyring.so 00:12:08.742 CC lib/sock/sock.o 00:12:08.742 CC lib/sock/sock_rpc.o 00:12:08.742 CC lib/thread/thread.o 00:12:08.742 CC lib/thread/iobuf.o 00:12:09.035 LIB libspdk_sock.a 00:12:09.293 SO libspdk_sock.so.10.0 00:12:09.293 SYMLINK libspdk_sock.so 00:12:09.551 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:09.551 CC lib/nvme/nvme_ctrlr.o 00:12:09.551 CC lib/nvme/nvme_fabric.o 00:12:09.551 CC lib/nvme/nvme_pcie_common.o 00:12:09.551 CC lib/nvme/nvme_qpair.o 00:12:09.551 CC lib/nvme/nvme_pcie.o 00:12:09.551 CC lib/nvme/nvme_ns_cmd.o 00:12:09.551 CC lib/nvme/nvme_ns.o 00:12:09.551 CC lib/nvme/nvme.o 00:12:10.491 CC lib/nvme/nvme_quirks.o 00:12:10.491 CC lib/nvme/nvme_transport.o 00:12:10.491 LIB libspdk_thread.a 00:12:10.491 CC lib/nvme/nvme_discovery.o 00:12:10.491 SO libspdk_thread.so.10.1 00:12:10.755 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:10.755 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:10.755 SYMLINK libspdk_thread.so 00:12:10.755 CC lib/nvme/nvme_tcp.o 00:12:10.755 CC lib/nvme/nvme_opal.o 00:12:10.755 CC lib/nvme/nvme_io_msg.o 00:12:11.012 CC lib/nvme/nvme_poll_group.o 00:12:11.012 CC lib/nvme/nvme_zns.o 00:12:11.270 CC lib/nvme/nvme_stubs.o 00:12:11.270 CC lib/nvme/nvme_auth.o 00:12:11.270 CC lib/nvme/nvme_cuse.o 00:12:11.270 CC lib/nvme/nvme_vfio_user.o 00:12:11.529 CC lib/accel/accel.o 00:12:11.529 CC lib/blob/blobstore.o 00:12:11.529 CC lib/accel/accel_rpc.o 00:12:11.529 CC lib/nvme/nvme_rdma.o 00:12:11.787 CC lib/accel/accel_sw.o 00:12:12.044 CC lib/blob/request.o 00:12:12.044 CC lib/blob/zeroes.o 00:12:12.044 CC lib/init/json_config.o 00:12:12.310 CC lib/blob/blob_bs_dev.o 00:12:12.311 CC lib/init/subsystem.o 00:12:12.311 CC lib/virtio/virtio.o 00:12:12.582 CC lib/vfu_tgt/tgt_endpoint.o 00:12:12.582 CC lib/init/subsystem_rpc.o 00:12:12.582 CC lib/init/rpc.o 00:12:12.582 CC lib/virtio/virtio_vhost_user.o 00:12:12.582 CC lib/virtio/virtio_vfio_user.o 00:12:12.582 CC lib/virtio/virtio_pci.o 00:12:12.582 CC lib/vfu_tgt/tgt_rpc.o 00:12:12.839 LIB libspdk_init.a 00:12:12.839 SO libspdk_init.so.5.0 00:12:12.839 SYMLINK libspdk_init.so 00:12:12.839 LIB libspdk_vfu_tgt.a 00:12:12.839 LIB libspdk_accel.a 00:12:13.096 SO libspdk_vfu_tgt.so.3.0 00:12:13.096 LIB libspdk_virtio.a 00:12:13.096 SO libspdk_accel.so.16.0 00:12:13.096 SO libspdk_virtio.so.7.0 00:12:13.096 SYMLINK libspdk_vfu_tgt.so 00:12:13.096 SYMLINK libspdk_virtio.so 00:12:13.096 SYMLINK libspdk_accel.so 00:12:13.096 CC lib/event/app.o 00:12:13.096 CC lib/event/reactor.o 00:12:13.096 CC lib/event/log_rpc.o 00:12:13.096 CC lib/event/app_rpc.o 
00:12:13.096 CC lib/event/scheduler_static.o 00:12:13.354 LIB libspdk_nvme.a 00:12:13.354 CC lib/bdev/bdev_zone.o 00:12:13.354 CC lib/bdev/bdev_rpc.o 00:12:13.354 CC lib/bdev/part.o 00:12:13.354 CC lib/bdev/bdev.o 00:12:13.354 CC lib/bdev/scsi_nvme.o 00:12:13.611 SO libspdk_nvme.so.13.1 00:12:13.869 LIB libspdk_event.a 00:12:13.869 SO libspdk_event.so.14.0 00:12:13.869 SYMLINK libspdk_event.so 00:12:13.869 SYMLINK libspdk_nvme.so 00:12:15.770 LIB libspdk_blob.a 00:12:15.770 SO libspdk_blob.so.11.0 00:12:15.770 SYMLINK libspdk_blob.so 00:12:16.028 CC lib/blobfs/tree.o 00:12:16.028 CC lib/blobfs/blobfs.o 00:12:16.028 CC lib/lvol/lvol.o 00:12:16.594 LIB libspdk_bdev.a 00:12:16.850 SO libspdk_bdev.so.16.0 00:12:16.850 SYMLINK libspdk_bdev.so 00:12:17.107 CC lib/nbd/nbd.o 00:12:17.107 CC lib/nbd/nbd_rpc.o 00:12:17.107 CC lib/ublk/ublk.o 00:12:17.107 CC lib/ublk/ublk_rpc.o 00:12:17.107 CC lib/nvmf/ctrlr.o 00:12:17.107 CC lib/nvmf/ctrlr_discovery.o 00:12:17.107 CC lib/ftl/ftl_core.o 00:12:17.107 CC lib/scsi/dev.o 00:12:17.365 LIB libspdk_blobfs.a 00:12:17.365 SO libspdk_blobfs.so.10.0 00:12:17.365 CC lib/nvmf/ctrlr_bdev.o 00:12:17.365 CC lib/nvmf/subsystem.o 00:12:17.365 SYMLINK libspdk_blobfs.so 00:12:17.365 CC lib/nvmf/nvmf.o 00:12:17.625 CC lib/scsi/lun.o 00:12:17.625 LIB libspdk_nbd.a 00:12:17.625 LIB libspdk_lvol.a 00:12:17.625 CC lib/ftl/ftl_init.o 00:12:17.625 SO libspdk_nbd.so.7.0 00:12:17.625 SO libspdk_lvol.so.10.0 00:12:17.902 SYMLINK libspdk_lvol.so 00:12:17.902 SYMLINK libspdk_nbd.so 00:12:17.902 CC lib/nvmf/nvmf_rpc.o 00:12:17.902 CC lib/scsi/port.o 00:12:17.902 CC lib/nvmf/transport.o 00:12:17.902 CC lib/ftl/ftl_layout.o 00:12:17.902 CC lib/ftl/ftl_debug.o 00:12:17.902 CC lib/scsi/scsi.o 00:12:17.902 LIB libspdk_ublk.a 00:12:18.160 SO libspdk_ublk.so.3.0 00:12:18.160 SYMLINK libspdk_ublk.so 00:12:18.160 CC lib/scsi/scsi_bdev.o 00:12:18.160 CC lib/scsi/scsi_pr.o 00:12:18.160 CC lib/ftl/ftl_io.o 00:12:18.417 CC lib/ftl/ftl_sb.o 00:12:18.417 CC lib/nvmf/tcp.o 00:12:18.675 CC lib/nvmf/stubs.o 00:12:18.675 CC lib/ftl/ftl_l2p.o 00:12:18.675 CC lib/nvmf/mdns_server.o 00:12:18.675 CC lib/ftl/ftl_l2p_flat.o 00:12:18.675 CC lib/ftl/ftl_nv_cache.o 00:12:18.932 CC lib/scsi/scsi_rpc.o 00:12:18.932 CC lib/scsi/task.o 00:12:18.932 CC lib/nvmf/vfio_user.o 00:12:18.932 CC lib/nvmf/rdma.o 00:12:18.932 CC lib/nvmf/auth.o 00:12:19.189 LIB libspdk_scsi.a 00:12:19.189 CC lib/ftl/ftl_band.o 00:12:19.189 CC lib/ftl/ftl_band_ops.o 00:12:19.189 SO libspdk_scsi.so.9.0 00:12:19.189 CC lib/ftl/ftl_writer.o 00:12:19.189 SYMLINK libspdk_scsi.so 00:12:19.446 CC lib/ftl/ftl_rq.o 00:12:19.446 CC lib/ftl/ftl_reloc.o 00:12:19.446 CC lib/ftl/ftl_l2p_cache.o 00:12:19.703 CC lib/ftl/ftl_p2l.o 00:12:19.703 CC lib/iscsi/conn.o 00:12:19.703 CC lib/vhost/vhost.o 00:12:19.961 CC lib/vhost/vhost_rpc.o 00:12:19.961 CC lib/vhost/vhost_scsi.o 00:12:19.961 CC lib/vhost/vhost_blk.o 00:12:20.219 CC lib/ftl/mngt/ftl_mngt.o 00:12:20.219 CC lib/iscsi/init_grp.o 00:12:20.477 CC lib/iscsi/iscsi.o 00:12:20.477 CC lib/iscsi/md5.o 00:12:20.477 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:12:20.477 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:12:20.734 CC lib/ftl/mngt/ftl_mngt_startup.o 00:12:20.734 CC lib/ftl/mngt/ftl_mngt_md.o 00:12:20.734 CC lib/ftl/mngt/ftl_mngt_misc.o 00:12:20.734 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:12:20.734 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:12:20.992 CC lib/ftl/mngt/ftl_mngt_band.o 00:12:20.992 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:12:20.992 CC lib/vhost/rte_vhost_user.o 00:12:20.992 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:12:20.992 CC 
lib/iscsi/param.o 00:12:20.992 CC lib/iscsi/portal_grp.o 00:12:20.992 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:12:21.250 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:12:21.250 CC lib/iscsi/tgt_node.o 00:12:21.250 CC lib/iscsi/iscsi_subsystem.o 00:12:21.250 CC lib/ftl/utils/ftl_conf.o 00:12:21.509 CC lib/ftl/utils/ftl_md.o 00:12:21.509 CC lib/ftl/utils/ftl_mempool.o 00:12:21.509 CC lib/iscsi/iscsi_rpc.o 00:12:21.509 CC lib/iscsi/task.o 00:12:21.509 CC lib/ftl/utils/ftl_bitmap.o 00:12:21.811 CC lib/ftl/utils/ftl_property.o 00:12:21.811 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:12:21.811 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:12:21.811 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:12:21.811 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:12:22.069 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:12:22.069 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:12:22.069 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:12:22.069 CC lib/ftl/upgrade/ftl_sb_v3.o 00:12:22.069 CC lib/ftl/upgrade/ftl_sb_v5.o 00:12:22.069 CC lib/ftl/nvc/ftl_nvc_dev.o 00:12:22.069 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:12:22.069 LIB libspdk_nvmf.a 00:12:22.327 LIB libspdk_iscsi.a 00:12:22.327 CC lib/ftl/base/ftl_base_dev.o 00:12:22.327 CC lib/ftl/base/ftl_base_bdev.o 00:12:22.327 CC lib/ftl/ftl_trace.o 00:12:22.327 LIB libspdk_vhost.a 00:12:22.327 SO libspdk_iscsi.so.8.0 00:12:22.327 SO libspdk_nvmf.so.19.0 00:12:22.327 SO libspdk_vhost.so.8.0 00:12:22.584 LIB libspdk_ftl.a 00:12:22.584 SYMLINK libspdk_vhost.so 00:12:22.584 SYMLINK libspdk_iscsi.so 00:12:22.584 SYMLINK libspdk_nvmf.so 00:12:22.842 SO libspdk_ftl.so.9.0 00:12:23.099 SYMLINK libspdk_ftl.so 00:12:23.664 CC module/env_dpdk/env_dpdk_rpc.o 00:12:23.664 CC module/vfu_device/vfu_virtio.o 00:12:23.664 CC module/scheduler/gscheduler/gscheduler.o 00:12:23.664 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:23.664 CC module/accel/error/accel_error.o 00:12:23.664 CC module/blob/bdev/blob_bdev.o 00:12:23.664 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:12:23.664 CC module/keyring/file/keyring.o 00:12:23.664 CC module/accel/ioat/accel_ioat.o 00:12:23.664 CC module/sock/posix/posix.o 00:12:23.664 LIB libspdk_env_dpdk_rpc.a 00:12:23.664 SO libspdk_env_dpdk_rpc.so.6.0 00:12:23.921 LIB libspdk_scheduler_dpdk_governor.a 00:12:23.921 CC module/accel/error/accel_error_rpc.o 00:12:23.921 SYMLINK libspdk_env_dpdk_rpc.so 00:12:23.921 CC module/keyring/file/keyring_rpc.o 00:12:23.921 SO libspdk_scheduler_dpdk_governor.so.4.0 00:12:23.921 CC module/vfu_device/vfu_virtio_blk.o 00:12:23.921 CC module/accel/ioat/accel_ioat_rpc.o 00:12:23.921 LIB libspdk_scheduler_gscheduler.a 00:12:23.921 LIB libspdk_scheduler_dynamic.a 00:12:23.921 SO libspdk_scheduler_gscheduler.so.4.0 00:12:23.921 SO libspdk_scheduler_dynamic.so.4.0 00:12:23.921 SYMLINK libspdk_scheduler_dpdk_governor.so 00:12:23.922 CC module/vfu_device/vfu_virtio_scsi.o 00:12:23.922 SYMLINK libspdk_scheduler_dynamic.so 00:12:23.922 LIB libspdk_blob_bdev.a 00:12:23.922 CC module/vfu_device/vfu_virtio_rpc.o 00:12:23.922 LIB libspdk_accel_error.a 00:12:23.922 LIB libspdk_keyring_file.a 00:12:23.922 SO libspdk_blob_bdev.so.11.0 00:12:23.922 LIB libspdk_accel_ioat.a 00:12:23.922 SYMLINK libspdk_scheduler_gscheduler.so 00:12:23.922 SO libspdk_accel_error.so.2.0 00:12:23.922 SO libspdk_accel_ioat.so.6.0 00:12:24.179 SO libspdk_keyring_file.so.1.0 00:12:24.179 SYMLINK libspdk_blob_bdev.so 00:12:24.179 SYMLINK libspdk_accel_error.so 00:12:24.179 SYMLINK libspdk_accel_ioat.so 00:12:24.179 SYMLINK libspdk_keyring_file.so 00:12:24.179 CC module/accel/dsa/accel_dsa.o 
00:12:24.179 CC module/accel/iaa/accel_iaa.o 00:12:24.436 CC module/keyring/linux/keyring.o 00:12:24.436 CC module/sock/uring/uring.o 00:12:24.436 CC module/keyring/linux/keyring_rpc.o 00:12:24.436 CC module/bdev/delay/vbdev_delay.o 00:12:24.436 CC module/blobfs/bdev/blobfs_bdev.o 00:12:24.436 CC module/bdev/error/vbdev_error.o 00:12:24.436 LIB libspdk_keyring_linux.a 00:12:24.436 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:24.436 CC module/accel/iaa/accel_iaa_rpc.o 00:12:24.436 LIB libspdk_vfu_device.a 00:12:24.436 SO libspdk_keyring_linux.so.1.0 00:12:24.695 LIB libspdk_sock_posix.a 00:12:24.695 CC module/accel/dsa/accel_dsa_rpc.o 00:12:24.695 SO libspdk_vfu_device.so.3.0 00:12:24.695 SO libspdk_sock_posix.so.6.0 00:12:24.695 SYMLINK libspdk_keyring_linux.so 00:12:24.695 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:24.695 LIB libspdk_accel_iaa.a 00:12:24.695 CC module/bdev/error/vbdev_error_rpc.o 00:12:24.695 LIB libspdk_blobfs_bdev.a 00:12:24.695 SYMLINK libspdk_vfu_device.so 00:12:24.695 SYMLINK libspdk_sock_posix.so 00:12:24.695 SO libspdk_accel_iaa.so.3.0 00:12:24.695 SO libspdk_blobfs_bdev.so.6.0 00:12:24.695 LIB libspdk_accel_dsa.a 00:12:24.952 SYMLINK libspdk_blobfs_bdev.so 00:12:24.952 SYMLINK libspdk_accel_iaa.so 00:12:24.952 SO libspdk_accel_dsa.so.5.0 00:12:24.952 SYMLINK libspdk_accel_dsa.so 00:12:24.952 LIB libspdk_bdev_error.a 00:12:24.952 CC module/bdev/gpt/gpt.o 00:12:24.952 SO libspdk_bdev_error.so.6.0 00:12:24.952 LIB libspdk_bdev_delay.a 00:12:24.952 CC module/bdev/lvol/vbdev_lvol.o 00:12:24.952 SO libspdk_bdev_delay.so.6.0 00:12:24.952 CC module/bdev/null/bdev_null.o 00:12:24.952 CC module/bdev/malloc/bdev_malloc.o 00:12:24.952 SYMLINK libspdk_bdev_error.so 00:12:24.952 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:24.952 CC module/bdev/passthru/vbdev_passthru.o 00:12:25.209 CC module/bdev/nvme/bdev_nvme.o 00:12:25.209 SYMLINK libspdk_bdev_delay.so 00:12:25.209 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:25.209 CC module/bdev/raid/bdev_raid.o 00:12:25.209 LIB libspdk_sock_uring.a 00:12:25.209 CC module/bdev/gpt/vbdev_gpt.o 00:12:25.209 SO libspdk_sock_uring.so.5.0 00:12:25.209 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:25.467 SYMLINK libspdk_sock_uring.so 00:12:25.467 CC module/bdev/null/bdev_null_rpc.o 00:12:25.467 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:25.467 LIB libspdk_bdev_passthru.a 00:12:25.467 CC module/bdev/nvme/nvme_rpc.o 00:12:25.467 CC module/bdev/raid/bdev_raid_rpc.o 00:12:25.467 LIB libspdk_bdev_malloc.a 00:12:25.467 LIB libspdk_bdev_null.a 00:12:25.467 SO libspdk_bdev_passthru.so.6.0 00:12:25.467 SO libspdk_bdev_malloc.so.6.0 00:12:25.467 SO libspdk_bdev_null.so.6.0 00:12:25.725 LIB libspdk_bdev_lvol.a 00:12:25.725 SYMLINK libspdk_bdev_malloc.so 00:12:25.725 CC module/bdev/raid/bdev_raid_sb.o 00:12:25.725 SYMLINK libspdk_bdev_null.so 00:12:25.725 LIB libspdk_bdev_gpt.a 00:12:25.725 CC module/bdev/raid/raid0.o 00:12:25.725 SO libspdk_bdev_lvol.so.6.0 00:12:25.725 SYMLINK libspdk_bdev_passthru.so 00:12:25.725 CC module/bdev/nvme/bdev_mdns_client.o 00:12:25.725 SO libspdk_bdev_gpt.so.6.0 00:12:25.725 CC module/bdev/nvme/vbdev_opal.o 00:12:25.725 SYMLINK libspdk_bdev_lvol.so 00:12:25.725 CC module/bdev/nvme/vbdev_opal_rpc.o 00:12:25.725 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:12:25.725 SYMLINK libspdk_bdev_gpt.so 00:12:25.725 CC module/bdev/raid/raid1.o 00:12:25.986 CC module/bdev/raid/concat.o 00:12:26.253 CC module/bdev/split/vbdev_split.o 00:12:26.253 CC module/bdev/split/vbdev_split_rpc.o 00:12:26.253 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:12:26.253 CC module/bdev/uring/bdev_uring.o 00:12:26.253 CC module/bdev/aio/bdev_aio.o 00:12:26.253 CC module/bdev/ftl/bdev_ftl.o 00:12:26.511 CC module/bdev/ftl/bdev_ftl_rpc.o 00:12:26.511 CC module/bdev/iscsi/bdev_iscsi.o 00:12:26.511 LIB libspdk_bdev_split.a 00:12:26.511 LIB libspdk_bdev_raid.a 00:12:26.511 CC module/bdev/virtio/bdev_virtio_scsi.o 00:12:26.511 SO libspdk_bdev_split.so.6.0 00:12:26.511 SO libspdk_bdev_raid.so.6.0 00:12:26.511 SYMLINK libspdk_bdev_split.so 00:12:26.511 CC module/bdev/virtio/bdev_virtio_blk.o 00:12:26.511 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:12:26.768 CC module/bdev/aio/bdev_aio_rpc.o 00:12:26.768 LIB libspdk_bdev_ftl.a 00:12:26.768 SYMLINK libspdk_bdev_raid.so 00:12:26.768 CC module/bdev/uring/bdev_uring_rpc.o 00:12:26.768 CC module/bdev/virtio/bdev_virtio_rpc.o 00:12:26.768 SO libspdk_bdev_ftl.so.6.0 00:12:26.768 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:12:26.768 LIB libspdk_bdev_zone_block.a 00:12:26.768 SYMLINK libspdk_bdev_ftl.so 00:12:26.768 LIB libspdk_bdev_aio.a 00:12:26.768 SO libspdk_bdev_zone_block.so.6.0 00:12:26.768 SO libspdk_bdev_aio.so.6.0 00:12:26.768 LIB libspdk_bdev_uring.a 00:12:27.026 SO libspdk_bdev_uring.so.6.0 00:12:27.026 SYMLINK libspdk_bdev_zone_block.so 00:12:27.026 LIB libspdk_bdev_iscsi.a 00:12:27.026 SYMLINK libspdk_bdev_aio.so 00:12:27.026 SO libspdk_bdev_iscsi.so.6.0 00:12:27.027 SYMLINK libspdk_bdev_uring.so 00:12:27.027 SYMLINK libspdk_bdev_iscsi.so 00:12:27.284 LIB libspdk_bdev_virtio.a 00:12:27.284 SO libspdk_bdev_virtio.so.6.0 00:12:27.284 SYMLINK libspdk_bdev_virtio.so 00:12:27.850 LIB libspdk_bdev_nvme.a 00:12:28.119 SO libspdk_bdev_nvme.so.7.0 00:12:28.119 SYMLINK libspdk_bdev_nvme.so 00:12:28.687 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:12:28.687 CC module/event/subsystems/vmd/vmd.o 00:12:28.687 CC module/event/subsystems/vmd/vmd_rpc.o 00:12:28.687 CC module/event/subsystems/sock/sock.o 00:12:28.687 CC module/event/subsystems/iobuf/iobuf.o 00:12:28.687 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:12:28.687 CC module/event/subsystems/scheduler/scheduler.o 00:12:28.687 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:12:28.687 CC module/event/subsystems/keyring/keyring.o 00:12:28.944 LIB libspdk_event_vhost_blk.a 00:12:28.944 LIB libspdk_event_scheduler.a 00:12:28.944 LIB libspdk_event_sock.a 00:12:28.944 LIB libspdk_event_vmd.a 00:12:28.944 LIB libspdk_event_keyring.a 00:12:28.944 SO libspdk_event_sock.so.5.0 00:12:28.944 SO libspdk_event_scheduler.so.4.0 00:12:28.944 SO libspdk_event_vhost_blk.so.3.0 00:12:28.944 LIB libspdk_event_vfu_tgt.a 00:12:28.944 LIB libspdk_event_iobuf.a 00:12:28.944 SO libspdk_event_vmd.so.6.0 00:12:28.944 SO libspdk_event_keyring.so.1.0 00:12:28.944 SO libspdk_event_vfu_tgt.so.3.0 00:12:28.944 SO libspdk_event_iobuf.so.3.0 00:12:28.944 SYMLINK libspdk_event_sock.so 00:12:28.944 SYMLINK libspdk_event_scheduler.so 00:12:28.944 SYMLINK libspdk_event_keyring.so 00:12:28.944 SYMLINK libspdk_event_vhost_blk.so 00:12:28.944 SYMLINK libspdk_event_vmd.so 00:12:28.944 SYMLINK libspdk_event_vfu_tgt.so 00:12:29.201 SYMLINK libspdk_event_iobuf.so 00:12:29.461 CC module/event/subsystems/accel/accel.o 00:12:29.461 LIB libspdk_event_accel.a 00:12:29.719 SO libspdk_event_accel.so.6.0 00:12:29.719 SYMLINK libspdk_event_accel.so 00:12:29.976 CC module/event/subsystems/bdev/bdev.o 00:12:30.235 LIB libspdk_event_bdev.a 00:12:30.235 SO libspdk_event_bdev.so.6.0 00:12:30.235 SYMLINK libspdk_event_bdev.so 00:12:30.492 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:12:30.492 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:30.492 CC module/event/subsystems/nbd/nbd.o 00:12:30.492 CC module/event/subsystems/scsi/scsi.o 00:12:30.751 CC module/event/subsystems/ublk/ublk.o 00:12:30.751 LIB libspdk_event_scsi.a 00:12:30.751 LIB libspdk_event_nbd.a 00:12:30.751 SO libspdk_event_scsi.so.6.0 00:12:30.751 LIB libspdk_event_ublk.a 00:12:30.751 SO libspdk_event_nbd.so.6.0 00:12:30.751 SO libspdk_event_ublk.so.3.0 00:12:31.009 SYMLINK libspdk_event_scsi.so 00:12:31.009 SYMLINK libspdk_event_nbd.so 00:12:31.009 SYMLINK libspdk_event_ublk.so 00:12:31.009 LIB libspdk_event_nvmf.a 00:12:31.009 SO libspdk_event_nvmf.so.6.0 00:12:31.266 SYMLINK libspdk_event_nvmf.so 00:12:31.266 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:12:31.266 CC module/event/subsystems/iscsi/iscsi.o 00:12:31.266 LIB libspdk_event_vhost_scsi.a 00:12:31.545 LIB libspdk_event_iscsi.a 00:12:31.545 SO libspdk_event_vhost_scsi.so.3.0 00:12:31.545 SO libspdk_event_iscsi.so.6.0 00:12:31.545 SYMLINK libspdk_event_vhost_scsi.so 00:12:31.545 SYMLINK libspdk_event_iscsi.so 00:12:31.808 SO libspdk.so.6.0 00:12:31.808 SYMLINK libspdk.so 00:12:32.066 CXX app/trace/trace.o 00:12:32.066 CC test/rpc_client/rpc_client_test.o 00:12:32.066 TEST_HEADER include/spdk/accel.h 00:12:32.066 CC examples/interrupt_tgt/interrupt_tgt.o 00:12:32.066 TEST_HEADER include/spdk/accel_module.h 00:12:32.066 TEST_HEADER include/spdk/assert.h 00:12:32.066 TEST_HEADER include/spdk/barrier.h 00:12:32.066 TEST_HEADER include/spdk/base64.h 00:12:32.066 TEST_HEADER include/spdk/bdev.h 00:12:32.066 TEST_HEADER include/spdk/bdev_module.h 00:12:32.066 TEST_HEADER include/spdk/bdev_zone.h 00:12:32.066 TEST_HEADER include/spdk/bit_array.h 00:12:32.066 TEST_HEADER include/spdk/bit_pool.h 00:12:32.066 TEST_HEADER include/spdk/blob_bdev.h 00:12:32.066 TEST_HEADER include/spdk/blobfs_bdev.h 00:12:32.066 TEST_HEADER include/spdk/blobfs.h 00:12:32.066 TEST_HEADER include/spdk/blob.h 00:12:32.066 TEST_HEADER include/spdk/conf.h 00:12:32.066 TEST_HEADER include/spdk/config.h 00:12:32.066 TEST_HEADER include/spdk/cpuset.h 00:12:32.066 TEST_HEADER include/spdk/crc16.h 00:12:32.066 TEST_HEADER include/spdk/crc32.h 00:12:32.066 TEST_HEADER include/spdk/crc64.h 00:12:32.066 TEST_HEADER include/spdk/dif.h 00:12:32.066 TEST_HEADER include/spdk/dma.h 00:12:32.066 TEST_HEADER include/spdk/endian.h 00:12:32.066 TEST_HEADER include/spdk/env_dpdk.h 00:12:32.066 TEST_HEADER include/spdk/env.h 00:12:32.066 TEST_HEADER include/spdk/event.h 00:12:32.066 CC test/thread/poller_perf/poller_perf.o 00:12:32.066 TEST_HEADER include/spdk/fd_group.h 00:12:32.066 TEST_HEADER include/spdk/fd.h 00:12:32.066 TEST_HEADER include/spdk/file.h 00:12:32.066 TEST_HEADER include/spdk/ftl.h 00:12:32.066 CC examples/ioat/perf/perf.o 00:12:32.066 TEST_HEADER include/spdk/gpt_spec.h 00:12:32.066 CC examples/util/zipf/zipf.o 00:12:32.066 TEST_HEADER include/spdk/hexlify.h 00:12:32.066 TEST_HEADER include/spdk/histogram_data.h 00:12:32.066 TEST_HEADER include/spdk/idxd.h 00:12:32.066 TEST_HEADER include/spdk/idxd_spec.h 00:12:32.066 TEST_HEADER include/spdk/init.h 00:12:32.066 TEST_HEADER include/spdk/ioat.h 00:12:32.066 TEST_HEADER include/spdk/ioat_spec.h 00:12:32.066 TEST_HEADER include/spdk/iscsi_spec.h 00:12:32.066 TEST_HEADER include/spdk/json.h 00:12:32.066 CC test/dma/test_dma/test_dma.o 00:12:32.066 TEST_HEADER include/spdk/jsonrpc.h 00:12:32.066 TEST_HEADER include/spdk/keyring.h 00:12:32.066 CC test/app/bdev_svc/bdev_svc.o 
00:12:32.324 TEST_HEADER include/spdk/keyring_module.h 00:12:32.324 TEST_HEADER include/spdk/likely.h 00:12:32.324 TEST_HEADER include/spdk/log.h 00:12:32.324 TEST_HEADER include/spdk/lvol.h 00:12:32.324 TEST_HEADER include/spdk/memory.h 00:12:32.324 TEST_HEADER include/spdk/mmio.h 00:12:32.324 TEST_HEADER include/spdk/nbd.h 00:12:32.324 TEST_HEADER include/spdk/net.h 00:12:32.324 TEST_HEADER include/spdk/notify.h 00:12:32.324 TEST_HEADER include/spdk/nvme.h 00:12:32.324 TEST_HEADER include/spdk/nvme_intel.h 00:12:32.324 TEST_HEADER include/spdk/nvme_ocssd.h 00:12:32.324 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:12:32.324 TEST_HEADER include/spdk/nvme_spec.h 00:12:32.324 TEST_HEADER include/spdk/nvme_zns.h 00:12:32.324 TEST_HEADER include/spdk/nvmf_cmd.h 00:12:32.324 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:12:32.324 TEST_HEADER include/spdk/nvmf.h 00:12:32.324 TEST_HEADER include/spdk/nvmf_spec.h 00:12:32.324 TEST_HEADER include/spdk/nvmf_transport.h 00:12:32.324 TEST_HEADER include/spdk/opal.h 00:12:32.324 TEST_HEADER include/spdk/opal_spec.h 00:12:32.324 CC test/env/mem_callbacks/mem_callbacks.o 00:12:32.324 TEST_HEADER include/spdk/pci_ids.h 00:12:32.324 LINK rpc_client_test 00:12:32.324 TEST_HEADER include/spdk/pipe.h 00:12:32.324 LINK interrupt_tgt 00:12:32.324 TEST_HEADER include/spdk/queue.h 00:12:32.324 LINK zipf 00:12:32.324 TEST_HEADER include/spdk/reduce.h 00:12:32.324 TEST_HEADER include/spdk/rpc.h 00:12:32.324 TEST_HEADER include/spdk/scheduler.h 00:12:32.324 TEST_HEADER include/spdk/scsi.h 00:12:32.324 TEST_HEADER include/spdk/scsi_spec.h 00:12:32.324 TEST_HEADER include/spdk/sock.h 00:12:32.324 LINK poller_perf 00:12:32.324 TEST_HEADER include/spdk/stdinc.h 00:12:32.324 TEST_HEADER include/spdk/string.h 00:12:32.324 TEST_HEADER include/spdk/thread.h 00:12:32.324 TEST_HEADER include/spdk/trace.h 00:12:32.324 TEST_HEADER include/spdk/trace_parser.h 00:12:32.324 TEST_HEADER include/spdk/tree.h 00:12:32.324 TEST_HEADER include/spdk/ublk.h 00:12:32.324 TEST_HEADER include/spdk/util.h 00:12:32.324 TEST_HEADER include/spdk/uuid.h 00:12:32.324 TEST_HEADER include/spdk/version.h 00:12:32.324 TEST_HEADER include/spdk/vfio_user_pci.h 00:12:32.324 TEST_HEADER include/spdk/vfio_user_spec.h 00:12:32.324 TEST_HEADER include/spdk/vhost.h 00:12:32.324 TEST_HEADER include/spdk/vmd.h 00:12:32.324 TEST_HEADER include/spdk/xor.h 00:12:32.324 TEST_HEADER include/spdk/zipf.h 00:12:32.324 CXX test/cpp_headers/accel.o 00:12:32.324 LINK ioat_perf 00:12:32.582 LINK bdev_svc 00:12:32.582 LINK spdk_trace 00:12:32.582 CC test/env/vtophys/vtophys.o 00:12:32.582 CC examples/ioat/verify/verify.o 00:12:32.582 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:12:32.582 CXX test/cpp_headers/accel_module.o 00:12:32.582 CC test/env/memory/memory_ut.o 00:12:32.840 CC test/env/pci/pci_ut.o 00:12:32.840 LINK vtophys 00:12:32.840 LINK test_dma 00:12:32.840 CXX test/cpp_headers/assert.o 00:12:32.840 CC app/trace_record/trace_record.o 00:12:32.840 LINK env_dpdk_post_init 00:12:32.840 LINK verify 00:12:32.840 LINK mem_callbacks 00:12:32.840 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:33.097 CXX test/cpp_headers/barrier.o 00:12:33.097 LINK spdk_trace_record 00:12:33.097 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:33.097 CC examples/thread/thread/thread_ex.o 00:12:33.354 CC examples/sock/hello_world/hello_sock.o 00:12:33.354 LINK pci_ut 00:12:33.354 CC examples/vmd/lsvmd/lsvmd.o 00:12:33.354 CXX test/cpp_headers/base64.o 00:12:33.354 CC test/event/event_perf/event_perf.o 00:12:33.354 LINK lsvmd 
00:12:33.354 LINK nvme_fuzz 00:12:33.354 CXX test/cpp_headers/bdev.o 00:12:33.354 LINK event_perf 00:12:33.614 CC app/nvmf_tgt/nvmf_main.o 00:12:33.614 LINK thread 00:12:33.614 LINK hello_sock 00:12:33.614 CXX test/cpp_headers/bdev_module.o 00:12:33.614 CC examples/vmd/led/led.o 00:12:33.614 CC app/iscsi_tgt/iscsi_tgt.o 00:12:33.614 LINK nvmf_tgt 00:12:33.872 CC test/event/reactor/reactor.o 00:12:33.872 CC app/spdk_tgt/spdk_tgt.o 00:12:33.872 LINK led 00:12:33.872 CC app/spdk_lspci/spdk_lspci.o 00:12:33.872 CXX test/cpp_headers/bdev_zone.o 00:12:33.872 LINK memory_ut 00:12:33.872 LINK reactor 00:12:33.872 CC examples/idxd/perf/perf.o 00:12:33.872 CXX test/cpp_headers/bit_array.o 00:12:33.872 LINK iscsi_tgt 00:12:34.130 LINK spdk_lspci 00:12:34.130 LINK spdk_tgt 00:12:34.130 CXX test/cpp_headers/bit_pool.o 00:12:34.130 CC app/spdk_nvme_perf/perf.o 00:12:34.130 CC test/event/reactor_perf/reactor_perf.o 00:12:34.412 CC test/event/app_repeat/app_repeat.o 00:12:34.412 CC test/nvme/aer/aer.o 00:12:34.412 CXX test/cpp_headers/blob_bdev.o 00:12:34.412 LINK idxd_perf 00:12:34.412 CC app/spdk_nvme_identify/identify.o 00:12:34.412 LINK reactor_perf 00:12:34.413 CC test/event/scheduler/scheduler.o 00:12:34.413 CC app/spdk_nvme_discover/discovery_aer.o 00:12:34.413 LINK app_repeat 00:12:34.671 CXX test/cpp_headers/blobfs_bdev.o 00:12:34.671 CXX test/cpp_headers/blobfs.o 00:12:34.671 LINK scheduler 00:12:34.671 LINK spdk_nvme_discover 00:12:34.671 LINK aer 00:12:34.671 CC examples/nvme/hello_world/hello_world.o 00:12:34.930 CXX test/cpp_headers/blob.o 00:12:34.930 CC app/spdk_top/spdk_top.o 00:12:34.930 CXX test/cpp_headers/conf.o 00:12:34.930 CC app/vhost/vhost.o 00:12:35.188 LINK hello_world 00:12:35.188 CC examples/nvme/reconnect/reconnect.o 00:12:35.188 CC test/nvme/reset/reset.o 00:12:35.188 CC examples/nvme/nvme_manage/nvme_manage.o 00:12:35.188 LINK vhost 00:12:35.188 CXX test/cpp_headers/config.o 00:12:35.188 CXX test/cpp_headers/cpuset.o 00:12:35.446 CC app/spdk_dd/spdk_dd.o 00:12:35.446 LINK spdk_nvme_perf 00:12:35.446 LINK iscsi_fuzz 00:12:35.446 CXX test/cpp_headers/crc16.o 00:12:35.446 LINK reset 00:12:35.446 LINK reconnect 00:12:35.446 LINK spdk_nvme_identify 00:12:35.704 CC examples/nvme/arbitration/arbitration.o 00:12:35.704 CXX test/cpp_headers/crc32.o 00:12:35.704 CC examples/nvme/hotplug/hotplug.o 00:12:35.704 CXX test/cpp_headers/crc64.o 00:12:35.962 CC test/nvme/sgl/sgl.o 00:12:35.962 LINK nvme_manage 00:12:35.962 CC examples/nvme/cmb_copy/cmb_copy.o 00:12:35.962 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:12:35.962 CC examples/nvme/abort/abort.o 00:12:35.962 LINK spdk_top 00:12:35.963 CXX test/cpp_headers/dif.o 00:12:35.963 LINK arbitration 00:12:35.963 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:12:35.963 LINK spdk_dd 00:12:35.963 CXX test/cpp_headers/dma.o 00:12:36.221 LINK cmb_copy 00:12:36.221 LINK hotplug 00:12:36.221 LINK sgl 00:12:36.221 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:12:36.221 CXX test/cpp_headers/endian.o 00:12:36.221 CXX test/cpp_headers/env_dpdk.o 00:12:36.221 CXX test/cpp_headers/env.o 00:12:36.501 LINK abort 00:12:36.501 CC test/accel/dif/dif.o 00:12:36.501 CC test/blobfs/mkfs/mkfs.o 00:12:36.501 LINK pmr_persistence 00:12:36.501 CC test/nvme/e2edp/nvme_dp.o 00:12:36.501 CXX test/cpp_headers/event.o 00:12:36.501 CC app/fio/nvme/fio_plugin.o 00:12:36.501 CC test/nvme/overhead/overhead.o 00:12:36.501 LINK vhost_fuzz 00:12:36.760 CXX test/cpp_headers/fd_group.o 00:12:36.760 CC examples/accel/perf/accel_perf.o 00:12:36.760 LINK mkfs 00:12:36.760 
LINK nvme_dp 00:12:36.760 CC test/nvme/err_injection/err_injection.o 00:12:36.761 CXX test/cpp_headers/fd.o 00:12:37.018 CC examples/blob/hello_world/hello_blob.o 00:12:37.019 CC test/app/histogram_perf/histogram_perf.o 00:12:37.019 LINK overhead 00:12:37.019 CXX test/cpp_headers/file.o 00:12:37.019 LINK err_injection 00:12:37.019 CXX test/cpp_headers/ftl.o 00:12:37.019 LINK histogram_perf 00:12:37.019 CXX test/cpp_headers/gpt_spec.o 00:12:37.019 LINK dif 00:12:37.277 LINK hello_blob 00:12:37.277 CC app/fio/bdev/fio_plugin.o 00:12:37.277 CXX test/cpp_headers/hexlify.o 00:12:37.277 CXX test/cpp_headers/histogram_data.o 00:12:37.277 LINK spdk_nvme 00:12:37.277 LINK accel_perf 00:12:37.277 CC test/nvme/startup/startup.o 00:12:37.277 CXX test/cpp_headers/idxd.o 00:12:37.277 CXX test/cpp_headers/idxd_spec.o 00:12:37.535 CC test/app/jsoncat/jsoncat.o 00:12:37.535 CC test/app/stub/stub.o 00:12:37.535 LINK startup 00:12:37.535 CXX test/cpp_headers/init.o 00:12:37.535 LINK jsoncat 00:12:37.535 CC examples/blob/cli/blobcli.o 00:12:37.535 CC test/lvol/esnap/esnap.o 00:12:37.535 CC test/nvme/reserve/reserve.o 00:12:37.793 CC test/bdev/bdevio/bdevio.o 00:12:37.793 CC test/nvme/simple_copy/simple_copy.o 00:12:37.793 LINK stub 00:12:37.793 CXX test/cpp_headers/ioat.o 00:12:37.793 LINK spdk_bdev 00:12:37.793 CC test/nvme/connect_stress/connect_stress.o 00:12:37.793 CC test/nvme/boot_partition/boot_partition.o 00:12:37.793 LINK reserve 00:12:38.050 CXX test/cpp_headers/ioat_spec.o 00:12:38.050 LINK simple_copy 00:12:38.050 CC test/nvme/compliance/nvme_compliance.o 00:12:38.050 CC test/nvme/fused_ordering/fused_ordering.o 00:12:38.050 LINK connect_stress 00:12:38.050 LINK boot_partition 00:12:38.050 CXX test/cpp_headers/iscsi_spec.o 00:12:38.050 LINK bdevio 00:12:38.050 LINK blobcli 00:12:38.308 CXX test/cpp_headers/json.o 00:12:38.308 CC test/nvme/doorbell_aers/doorbell_aers.o 00:12:38.308 LINK fused_ordering 00:12:38.308 CXX test/cpp_headers/jsonrpc.o 00:12:38.308 CXX test/cpp_headers/keyring.o 00:12:38.565 LINK doorbell_aers 00:12:38.565 CC test/nvme/fdp/fdp.o 00:12:38.565 LINK nvme_compliance 00:12:38.565 CXX test/cpp_headers/keyring_module.o 00:12:38.565 CC examples/bdev/hello_world/hello_bdev.o 00:12:38.565 CXX test/cpp_headers/likely.o 00:12:38.565 CXX test/cpp_headers/log.o 00:12:38.565 CC test/nvme/cuse/cuse.o 00:12:38.565 CXX test/cpp_headers/lvol.o 00:12:38.565 CC examples/bdev/bdevperf/bdevperf.o 00:12:38.823 CXX test/cpp_headers/memory.o 00:12:38.823 CXX test/cpp_headers/mmio.o 00:12:38.823 CXX test/cpp_headers/nbd.o 00:12:38.823 CXX test/cpp_headers/net.o 00:12:38.823 LINK hello_bdev 00:12:38.823 CXX test/cpp_headers/notify.o 00:12:38.823 CXX test/cpp_headers/nvme.o 00:12:38.823 LINK fdp 00:12:38.823 CXX test/cpp_headers/nvme_intel.o 00:12:38.823 CXX test/cpp_headers/nvme_ocssd.o 00:12:38.823 CXX test/cpp_headers/nvme_ocssd_spec.o 00:12:39.081 CXX test/cpp_headers/nvme_spec.o 00:12:39.081 CXX test/cpp_headers/nvme_zns.o 00:12:39.081 CXX test/cpp_headers/nvmf_cmd.o 00:12:39.081 CXX test/cpp_headers/nvmf_fc_spec.o 00:12:39.081 CXX test/cpp_headers/nvmf.o 00:12:39.081 CXX test/cpp_headers/nvmf_spec.o 00:12:39.081 CXX test/cpp_headers/nvmf_transport.o 00:12:39.338 CXX test/cpp_headers/opal.o 00:12:39.338 CXX test/cpp_headers/opal_spec.o 00:12:39.338 CXX test/cpp_headers/pci_ids.o 00:12:39.338 CXX test/cpp_headers/pipe.o 00:12:39.338 CXX test/cpp_headers/queue.o 00:12:39.338 CXX test/cpp_headers/reduce.o 00:12:39.338 CXX test/cpp_headers/rpc.o 00:12:39.338 CXX test/cpp_headers/scheduler.o 
00:12:39.338 CXX test/cpp_headers/scsi.o 00:12:39.338 CXX test/cpp_headers/scsi_spec.o 00:12:39.338 CXX test/cpp_headers/sock.o 00:12:39.338 CXX test/cpp_headers/stdinc.o 00:12:39.596 CXX test/cpp_headers/string.o 00:12:39.596 CXX test/cpp_headers/thread.o 00:12:39.596 CXX test/cpp_headers/trace.o 00:12:39.596 CXX test/cpp_headers/trace_parser.o 00:12:39.596 CXX test/cpp_headers/tree.o 00:12:39.596 CXX test/cpp_headers/ublk.o 00:12:39.596 CXX test/cpp_headers/util.o 00:12:39.596 CXX test/cpp_headers/uuid.o 00:12:39.596 CXX test/cpp_headers/version.o 00:12:39.596 CXX test/cpp_headers/vfio_user_pci.o 00:12:39.854 CXX test/cpp_headers/vfio_user_spec.o 00:12:39.854 CXX test/cpp_headers/vhost.o 00:12:39.854 LINK bdevperf 00:12:39.854 CXX test/cpp_headers/vmd.o 00:12:39.854 CXX test/cpp_headers/xor.o 00:12:39.854 CXX test/cpp_headers/zipf.o 00:12:40.421 LINK cuse 00:12:40.421 CC examples/nvmf/nvmf/nvmf.o 00:12:40.679 LINK nvmf 00:12:43.962 LINK esnap 00:12:44.528 00:12:44.528 real 1m16.148s 00:12:44.528 user 6m59.257s 00:12:44.528 sys 1m55.939s 00:12:44.528 16:50:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:12:44.528 ************************************ 00:12:44.528 END TEST make 00:12:44.528 ************************************ 00:12:44.528 16:50:46 make -- common/autotest_common.sh@10 -- $ set +x 00:12:44.528 16:50:46 -- common/autotest_common.sh@1142 -- $ return 0 00:12:44.528 16:50:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:12:44.528 16:50:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:12:44.528 16:50:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:12:44.528 16:50:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:44.528 16:50:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:12:44.528 16:50:46 -- pm/common@44 -- $ pid=5200 00:12:44.528 16:50:46 -- pm/common@50 -- $ kill -TERM 5200 00:12:44.528 16:50:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:44.528 16:50:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:12:44.528 16:50:46 -- pm/common@44 -- $ pid=5202 00:12:44.528 16:50:46 -- pm/common@50 -- $ kill -TERM 5202 00:12:44.787 16:50:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:44.787 16:50:46 -- nvmf/common.sh@7 -- # uname -s 00:12:44.787 16:50:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.787 16:50:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.787 16:50:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.787 16:50:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.787 16:50:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.787 16:50:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.787 16:50:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.787 16:50:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.787 16:50:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.787 16:50:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.787 16:50:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:12:44.787 16:50:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:12:44.787 16:50:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.787 16:50:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.787 
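
Right after the END TEST make timing summary, autotest.sh sources test/nvmf/common.sh, which pins the TCP test ports (4420-4422), the loopback target address, and a freshly generated host NQN/ID pair, and stores the base command in NVME_CONNECT; the sourcing continues just below with NET_TYPE and the test subsystem NQN. A minimal sketch of how those variables are typically combined into a connect call, assuming a TCP target on the loopback address shown in the trace; nothing below is copied from an actual connect in this run:

  # Illustrative only: the host NQN/ID are derived with `nvme gen-hostnqn` exactly
  # as in the trace; address, port and subsystem NQN mirror the variables exported
  # by test/nvmf/common.sh for the TCP/virt configuration.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}

  nvme connect -t tcp -a 127.0.0.1 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
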
16:50:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:44.787 16:50:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.787 16:50:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.787 16:50:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.787 16:50:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.787 16:50:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.787 16:50:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 16:50:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 16:50:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 16:50:46 -- paths/export.sh@5 -- # export PATH 00:12:44.787 16:50:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 16:50:46 -- nvmf/common.sh@47 -- # : 0 00:12:44.787 16:50:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.787 16:50:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.787 16:50:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.787 16:50:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.787 16:50:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.787 16:50:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.787 16:50:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.787 16:50:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.787 16:50:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:12:44.787 16:50:46 -- spdk/autotest.sh@32 -- # uname -s 00:12:44.787 16:50:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:12:44.787 16:50:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:12:44.787 16:50:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:44.787 16:50:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:12:44.787 16:50:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:44.787 16:50:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:12:44.787 16:50:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:12:44.787 16:50:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:12:44.787 16:50:46 -- spdk/autotest.sh@48 -- # udevadm_pid=53520 00:12:44.787 16:50:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:12:44.787 16:50:46 -- pm/common@17 -- # local monitor 00:12:44.787 16:50:46 -- pm/common@19 -- # for 
monitor in "${MONITOR_RESOURCES[@]}" 00:12:44.787 16:50:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:12:44.787 16:50:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:44.787 16:50:46 -- pm/common@21 -- # date +%s 00:12:44.787 16:50:46 -- pm/common@25 -- # sleep 1 00:12:44.787 16:50:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721667046 00:12:44.787 16:50:46 -- pm/common@21 -- # date +%s 00:12:44.787 16:50:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721667046 00:12:44.787 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721667046_collect-cpu-load.pm.log 00:12:44.787 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721667046_collect-vmstat.pm.log 00:12:45.738 16:50:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:12:45.738 16:50:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:12:45.738 16:50:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:45.738 16:50:47 -- common/autotest_common.sh@10 -- # set +x 00:12:45.738 16:50:47 -- spdk/autotest.sh@59 -- # create_test_list 00:12:45.738 16:50:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:12:45.738 16:50:47 -- common/autotest_common.sh@10 -- # set +x 00:12:45.738 16:50:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:12:45.738 16:50:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:12:45.738 16:50:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:12:45.738 16:50:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:12:45.738 16:50:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:12:45.738 16:50:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:12:45.738 16:50:47 -- common/autotest_common.sh@1455 -- # uname 00:12:45.738 16:50:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:12:45.738 16:50:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:12:45.738 16:50:47 -- common/autotest_common.sh@1475 -- # uname 00:12:45.738 16:50:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:12:45.738 16:50:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:12:45.738 16:50:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:12:45.738 16:50:47 -- spdk/autotest.sh@72 -- # hash lcov 00:12:45.738 16:50:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:12:45.738 16:50:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:12:45.738 --rc lcov_branch_coverage=1 00:12:45.738 --rc lcov_function_coverage=1 00:12:45.738 --rc genhtml_branch_coverage=1 00:12:45.738 --rc genhtml_function_coverage=1 00:12:45.738 --rc genhtml_legend=1 00:12:45.738 --rc geninfo_all_blocks=1 00:12:45.738 ' 00:12:45.738 16:50:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:12:45.738 --rc lcov_branch_coverage=1 00:12:45.738 --rc lcov_function_coverage=1 00:12:45.738 --rc genhtml_branch_coverage=1 00:12:45.738 --rc genhtml_function_coverage=1 00:12:45.738 --rc genhtml_legend=1 00:12:45.738 --rc geninfo_all_blocks=1 00:12:45.738 ' 00:12:45.738 16:50:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:12:45.738 --rc lcov_branch_coverage=1 00:12:45.738 --rc lcov_function_coverage=1 00:12:45.738 --rc 
genhtml_branch_coverage=1 00:12:45.738 --rc genhtml_function_coverage=1 00:12:45.738 --rc genhtml_legend=1 00:12:45.738 --rc geninfo_all_blocks=1 00:12:45.738 --no-external' 00:12:45.738 16:50:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:12:45.738 --rc lcov_branch_coverage=1 00:12:45.739 --rc lcov_function_coverage=1 00:12:45.739 --rc genhtml_branch_coverage=1 00:12:45.739 --rc genhtml_function_coverage=1 00:12:45.739 --rc genhtml_legend=1 00:12:45.739 --rc geninfo_all_blocks=1 00:12:45.739 --no-external' 00:12:45.739 16:50:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:12:45.996 lcov: LCOV version 1.14 00:12:45.996 16:50:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:13:00.921 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:13:00.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:13:15.817 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:13:15.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:13:15.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:13:15.818 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:13:15.818 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:13:15.818 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:13:15.818 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:13:18.382 16:51:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:13:18.382 16:51:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.382 16:51:19 -- common/autotest_common.sh@10 -- # set +x 00:13:18.382 16:51:19 -- spdk/autotest.sh@91 -- # rm -f 00:13:18.382 16:51:19 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:18.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:19.220 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:13:19.221 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:13:19.221 16:51:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:13:19.221 16:51:20 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:13:19.221 
16:51:20 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:13:19.221 16:51:20 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:13:19.221 16:51:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:19.221 16:51:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:13:19.221 16:51:20 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:13:19.221 16:51:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:19.221 16:51:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:13:19.221 16:51:20 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:13:19.221 16:51:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:19.221 16:51:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:13:19.221 16:51:20 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:13:19.221 16:51:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:19.221 16:51:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:13:19.221 16:51:20 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:13:19.221 16:51:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:19.221 16:51:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:19.221 16:51:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:13:19.221 16:51:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:19.221 16:51:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:19.221 16:51:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:13:19.221 16:51:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:13:19.221 16:51:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:19.221 No valid GPT data, bailing 00:13:19.221 16:51:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:19.221 16:51:20 -- scripts/common.sh@391 -- # pt= 00:13:19.221 16:51:20 -- scripts/common.sh@392 -- # return 1 00:13:19.221 16:51:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:13:19.221 1+0 records in 00:13:19.221 1+0 records out 00:13:19.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564992 s, 186 MB/s 00:13:19.221 16:51:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:19.221 16:51:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:19.221 16:51:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:13:19.221 16:51:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:13:19.221 16:51:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:13:19.221 No valid GPT data, bailing 00:13:19.221 16:51:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:19.221 16:51:20 -- scripts/common.sh@391 -- # pt= 00:13:19.221 16:51:20 -- scripts/common.sh@392 -- # return 1 00:13:19.221 16:51:20 -- spdk/autotest.sh@114 
-- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:13:19.479 1+0 records in 00:13:19.479 1+0 records out 00:13:19.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423054 s, 248 MB/s 00:13:19.479 16:51:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:19.479 16:51:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:19.479 16:51:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:13:19.479 16:51:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:13:19.479 16:51:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:13:19.479 No valid GPT data, bailing 00:13:19.479 16:51:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:13:19.479 16:51:20 -- scripts/common.sh@391 -- # pt= 00:13:19.479 16:51:20 -- scripts/common.sh@392 -- # return 1 00:13:19.479 16:51:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:13:19.479 1+0 records in 00:13:19.479 1+0 records out 00:13:19.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515136 s, 204 MB/s 00:13:19.479 16:51:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:19.479 16:51:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:19.479 16:51:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:13:19.479 16:51:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:13:19.479 16:51:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:13:19.479 No valid GPT data, bailing 00:13:19.479 16:51:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:13:19.479 16:51:20 -- scripts/common.sh@391 -- # pt= 00:13:19.479 16:51:20 -- scripts/common.sh@392 -- # return 1 00:13:19.479 16:51:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:13:19.479 1+0 records in 00:13:19.479 1+0 records out 00:13:19.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413056 s, 254 MB/s 00:13:19.479 16:51:20 -- spdk/autotest.sh@118 -- # sync 00:13:19.480 16:51:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:13:19.480 16:51:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:13:19.480 16:51:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:13:22.033 16:51:23 -- spdk/autotest.sh@124 -- # uname -s 00:13:22.033 16:51:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:13:22.033 16:51:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:13:22.033 16:51:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:22.033 16:51:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.033 16:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:22.033 ************************************ 00:13:22.033 START TEST setup.sh 00:13:22.033 ************************************ 00:13:22.033 16:51:23 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:13:22.033 * Looking for test storage... 
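
The pre-cleanup pass that just finished walks every whole NVMe namespace (/dev/nvme*n* without a partition suffix), asks scripts/spdk-gpt.py and blkid whether the device still carries a partition-table signature, and, when nothing is found, zeroes the first 1 MiB so stale metadata cannot leak into the tests that follow. A condensed sketch of that loop, under the assumption that a successful spdk-gpt.py check or a non-empty blkid PTTYPE means the device is still in use (error handling and logging trimmed):

  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      # skip namespaces that still report a recognizable partition table
      if scripts/spdk-gpt.py "$dev" || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
          continue
      fi
      # wipe the first 1 MiB so old GPT/filesystem metadata is gone
      dd if=/dev/zero of="$dev" bs=1M count=1
  done
  sync
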
00:13:22.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:22.033 16:51:23 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:13:22.033 16:51:23 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:13:22.033 16:51:23 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:13:22.033 16:51:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:22.033 16:51:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.033 16:51:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:22.033 ************************************ 00:13:22.033 START TEST acl 00:13:22.033 ************************************ 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:13:22.033 * Looking for test storage... 00:13:22.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:22.033 16:51:23 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:22.033 16:51:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:22.033 16:51:23 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:13:22.033 16:51:23 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:13:22.033 16:51:23 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:13:22.033 
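
test/setup/acl.sh opens with the same get_zoned_devs helper seen earlier in autotest.sh: it walks /sys/block/nvme*, reads each namespace's queue/zoned attribute, and records any device that reports something other than "none", since zoned namespaces need separate handling. A stripped-down sketch of that check, assuming the sysfs layout visible in the trace:

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned ]] || continue
      mode=$(<"$nvme/queue/zoned")        # "none" for conventional namespaces
      if [[ $mode != none ]]; then
          zoned_devs[${nvme##*/}]=$mode   # e.g. host-managed or host-aware
      fi
  done
  echo "zoned namespaces found: ${#zoned_devs[@]}"
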
16:51:23 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:13:22.033 16:51:23 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:13:22.033 16:51:23 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:22.033 16:51:23 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:22.600 16:51:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:13:22.600 16:51:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:13:22.600 16:51:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:22.600 16:51:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:13:22.600 16:51:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:13:22.600 16:51:24 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:23.532 Hugepages 00:13:23.532 node hugesize free / total 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:23.532 00:13:23.532 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:13:23.532 16:51:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:13:23.532 16:51:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:13:23.791 16:51:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:13:23.791 16:51:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:23.791 16:51:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.791 16:51:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:13:23.791 ************************************ 00:13:23.791 START TEST denied 
00:13:23.791 ************************************ 00:13:23.791 16:51:25 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:13:23.791 16:51:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:13:23.791 16:51:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:13:23.791 16:51:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:13:23.791 16:51:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:13:23.791 16:51:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:24.724 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:24.724 16:51:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:25.291 00:13:25.291 real 0m1.613s 00:13:25.291 user 0m0.607s 00:13:25.291 sys 0m0.940s 00:13:25.291 16:51:26 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.291 ************************************ 00:13:25.291 16:51:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:13:25.291 END TEST denied 00:13:25.291 ************************************ 00:13:25.291 16:51:26 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:13:25.291 16:51:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:13:25.291 16:51:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:25.291 16:51:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.291 16:51:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:13:25.291 ************************************ 00:13:25.291 START TEST allowed 00:13:25.291 ************************************ 00:13:25.291 16:51:26 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:13:25.291 16:51:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:13:25.291 16:51:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:13:25.291 16:51:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:13:25.291 16:51:26 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:25.291 16:51:26 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:13:26.226 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 
00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:26.226 16:51:27 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:27.160 00:13:27.160 real 0m1.755s 00:13:27.160 user 0m0.732s 00:13:27.160 sys 0m1.026s 00:13:27.160 16:51:28 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.160 16:51:28 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:13:27.160 ************************************ 00:13:27.160 END TEST allowed 00:13:27.160 ************************************ 00:13:27.160 16:51:28 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:13:27.161 00:13:27.161 real 0m5.391s 00:13:27.161 user 0m2.269s 00:13:27.161 sys 0m3.092s 00:13:27.161 16:51:28 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.161 16:51:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:13:27.161 ************************************ 00:13:27.161 END TEST acl 00:13:27.161 ************************************ 00:13:27.161 16:51:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:13:27.161 16:51:28 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:13:27.161 16:51:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:27.161 16:51:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.161 16:51:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:27.161 ************************************ 00:13:27.161 START TEST hugepages 00:13:27.161 ************************************ 00:13:27.161 16:51:28 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:13:27.420 * Looking for test storage... 
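
Both ACL tests above steer scripts/setup.sh purely through environment variables: "denied" sets PCI_BLOCKED to 0000:00:10.0 and expects the "Skipping denied controller" message, while "allowed" sets PCI_ALLOWED to the same controller and checks that only it is rebound while 0000:00:11.0 stays on the kernel nvme driver. A simplified sketch of the allow/block decision those variables imply; pci_is_usable is a hypothetical helper written for illustration, not SPDK's actual implementation:

  # Return 0 if the given PCI BDF may be touched, 1 otherwise.
  pci_is_usable() {
      local bdf=$1 entry
      for entry in $PCI_BLOCKED; do
          [[ $entry == "$bdf" ]] && return 1    # block list always wins
      done
      [[ -z $PCI_ALLOWED ]] && return 0         # empty allow list: everything allowed
      for entry in $PCI_ALLOWED; do
          [[ $entry == "$bdf" ]] && return 0
      done
      return 1                                  # allow list set, device not on it
  }

  pci_is_usable 0000:00:10.0 || echo "Skipping denied controller at 0000:00:10.0"
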
00:13:27.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5809124 kB' 'MemAvailable: 7396724 kB' 'Buffers: 2436 kB' 'Cached: 1801324 kB' 'SwapCached: 0 kB' 'Active: 437412 kB' 'Inactive: 1473220 kB' 'Active(anon): 117360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 108468 kB' 'Mapped: 48752 kB' 'Shmem: 10488 kB' 'KReclaimable: 62536 kB' 'Slab: 136880 kB' 'SReclaimable: 62536 kB' 'SUnreclaim: 74344 kB' 'KernelStack: 6576 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 337400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.420 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
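The key-by-key walk traced above is setup/common.sh's get_meminfo scanning /proc/meminfo with IFS=': ' and skipping every field until the requested Hugepagesize entry matches on the very next trace lines, where 2048 is echoed back to hugepages.sh. A minimal stand-alone sketch of that lookup (the helper name here is made up, and the real script also handles the per-node /sys/devices/system/node/node*/meminfo files):

meminfo_lookup() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # every other meminfo key is skipped, as seen in the trace
        echo "$val"                        # a trailing "kB" unit lands in the throwaway _ field
        return 0
    done </proc/meminfo
    return 1
}

meminfo_lookup Hugepagesize   # prints 2048 on this test VM, per the trace
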
00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:13:27.421 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:27.422 16:51:28 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:13:27.422 16:51:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:27.422 16:51:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.422 16:51:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:27.422 ************************************ 00:13:27.422 START TEST default_setup 00:13:27.422 ************************************ 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:13:27.422 16:51:28 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:28.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.374 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.374 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:28.374 16:51:29 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906376 kB' 'MemAvailable: 9493848 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451576 kB' 'Inactive: 1473228 kB' 'Active(anon): 131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122656 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136576 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74316 kB' 'KernelStack: 6480 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:28.374 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
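Earlier in the trace, hugepages.sh records the two control knobs for 2 MiB pages (default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and global_huge_nr=/proc/sys/vm/nr_hugepages), clear_hp echoes 0 into every per-node hugepage pool, and get_test_nr_hugepages 2097152 0 sizes the default_setup test at 2097152 / 2048 = 1024 pages on node 0, matching the HugePages_Total: 1024 shown in the meminfo dump above. The allocation itself is then left to scripts/setup.sh with CLEAR_HUGE=yes exported; the sketch below only illustrates the raw knobs involved (the helper names and the kB unit of the size argument are assumptions, and the target of clear_hp's echo 0 is inferred to be each pool's nr_hugepages file):

clear_and_size_hugepages() {                     # needs root; illustration only, not the SPDK scripts
    local size_kb=$1 pagesize_kb pool
    pagesize_kb=$(meminfo_lookup Hugepagesize)   # 2048 on this VM (see the sketch further up)
    for pool in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$pool"                         # clear_hp step: drop any leftover per-node pages
    done
    echo $((size_kb / pagesize_kb)) > /proc/sys/vm/nr_hugepages   # 2097152 / 2048 = 1024
}

clear_and_size_hugepages 2097152
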
00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.375 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
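The scan running through these lines is verify_nr_hugepages reading AnonHugePages. It does so because the check at hugepages.sh@96 found transparent hugepages not pinned to [never] (the sysfs knob reported "always [madvise] never", so madvise is the active mode), meaning THP-backed anonymous memory could also be holding 2 MiB pages; on this run the value comes back 0. A hedged sketch of that kind of check, reusing the meminfo_lookup sketch from above (the helper name and the exact handling when THP is disabled are assumptions, not the verbatim SPDK logic):

thp_anon_kb() {
    local mode
    mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $mode == *"[never]"* ]]; then
        echo 0                                               # THP disabled, nothing to account for
    else
        meminfo_lookup AnonHugePages                         # 0 in this run, per the trace
    fi
}
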
00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906376 kB' 'MemAvailable: 9493848 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451400 kB' 'Inactive: 1473228 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122480 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136588 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74328 kB' 'KernelStack: 6496 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 
'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.376 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.377 16:51:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:13:28.377 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue -- the same trace repeats for Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd; none of them match
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
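What the get_meminfo helper traced above boils down to is a plain field lookup: split each /proc/meminfo line on ': ' and stop at the requested key. A minimal sketch of that pattern (the function name and error handling here are illustrative, not the exact setup/common.sh code):

get_meminfo_field() {
    # Print the value column of one /proc/meminfo field, e.g. HugePages_Surp.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

surp=$(get_meminfo_field HugePages_Surp)   # 0 on this runner, per the trace above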
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:13:28.378 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906376 kB' 'MemAvailable: 9493852 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451188 kB' 'Inactive: 1473232 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122236 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136588 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74328 kB' 'KernelStack: 6496 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:13:28.379 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue -- repeated for every field from MemTotal through HugePages_Free; none of them match
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
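The checks at setup/hugepages.sh@107-109 are a consistency test on the pool that was just configured: the kernel-reported total must equal the requested pages plus any surplus and reserved pages, and with neither surplus nor reserved pages it must equal nr_hugepages itself. The literal 1024 in the trace is a value the script had already expanded; the sketch below re-derives everything from /proc/meminfo for illustration and is not the exact hugepages.sh code:

nr_hugepages=1024                                                  # requested pool size
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)     # 0 in the trace above
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)     # 0 in the trace above
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1024 in the trace above

if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "hugepage pool consistent: ${total} pages, no surplus or reserved pages"
else
    echo "unexpected hugepage accounting: total=${total} surp=${surp} resv=${resv}" >&2
fi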
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:13:28.380 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906376 kB' 'MemAvailable: 9493852 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451384 kB' 'Inactive: 1473232 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136576 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74316 kB' 'KernelStack: 6448 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:13:28.381 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue -- repeated for every field from MemTotal through Unaccepted; none of them match
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
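get_meminfo HugePages_Total came back as 1024, which lines up with the rest of the snapshot: with only the default 2048 kB page size in play, 1024 pages should account for the reported 'Hugetlb: 2097152 kB'. A quick re-check of that relationship from plain shell arithmetic (a sanity check on the numbers, not part of the test scripts):

pages=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1024
page_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)    # 2048 (kB)
hugetlb_kb=$(awk '$1 == "Hugetlb:" {print $2}' /proc/meminfo)      # 2097152 (kB)
echo "expected: $(( pages * page_kb )) kB, reported: ${hugetlb_kb} kB"
(( pages * page_kb == hugetlb_kb )) && echo "hugetlb accounting matches"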
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:13:28.382 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:13:28.641 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
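get_nodes and the per-node get_meminfo call traced above switch from /proc/meminfo to the per-NUMA-node view in sysfs: enumerate /sys/devices/system/node/node<N>, then read node<N>/meminfo, whose lines carry a leading "Node <N> " prefix that has to be stripped before the same field scan can run. A condensed sketch of that flow (helper names illustrative; this runner exposes a single node0):

shopt -s extglob nullglob

# Enumerate the NUMA nodes exposed by sysfs, as get_nodes does.
nodes=()
for node_dir in /sys/devices/system/node/node+([0-9]); do
    nodes+=("${node_dir##*node}")
done
echo "nodes found: ${nodes[*]:-none}"          # "0" on this single-node VM

# Read one field from a node's meminfo, stripping the "Node <N> " prefix
# before scanning for the field, the same way setup/common.sh does.
node_meminfo_field() {
    local node=$1 get=$2 var val _
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

echo "node0 HugePages_Surp: $(node_meminfo_field 0 HugePages_Surp)"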
00:13:28.641 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906376 kB' 'MemUsed: 4335596 kB' 'SwapCached: 0 kB' 'Active: 451304 kB' 'Inactive: 1473232 kB' 'Active(anon): 131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1803748 kB' 'Mapped: 48696 kB' 'AnonPages: 122396 kB' 'Shmem: 10464 kB' 'KernelStack: 6500 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62260 kB' 'Slab: 136576 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:13:28.641 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:13:28.641 16:51:29 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue -- repeated for MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable and Bounce; none of them match
00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.642 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:28.643 node0=1024 expecting 1024 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:28.643 00:13:28.643 real 0m1.122s 00:13:28.643 user 0m0.498s 00:13:28.643 sys 0m0.570s 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.643 16:51:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:13:28.643 ************************************ 00:13:28.643 END TEST default_setup 00:13:28.643 ************************************ 00:13:28.643 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:13:28.643 16:51:30 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:13:28.643 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
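The loop traced above is setup/common.sh's get_meminfo helper scanning a meminfo dump field by field for one key (here HugePages_Surp, which comes back as 0 and is added into nodes_test before the node0=1024 check). A minimal bash sketch of that lookup, reconstructed from the xtrace rather than copied from the SPDK source, including the extglob prefix-stripping the trace shows:

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {    # usage: get_meminfo <key> [node]
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # Use the per-node meminfo when a node id is given and sysfs exposes it.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix of per-node lines
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every other field, as the trace does
        echo "$val"                        # e.g. 0 for HugePages_Surp above
        return 0
    done
    return 1
}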
00:13:28.643 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.643 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:28.643 ************************************ 00:13:28.643 START TEST per_node_1G_alloc 00:13:28.643 ************************************ 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:28.643 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:28.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.903 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:28.903 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:13:28.903 
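Just before this, get_test_nr_hugepages turned the 1048576 kB (1 GiB) request into nr_hugepages=512 pinned to node 0 and handed the result to scripts/setup.sh as NRHUGE=512 HUGENODE=0. A rough sketch of that sizing step; it assumes the division by the default 2048 kB hugepage size that the numbers imply (1048576 / 2048 = 512), which is inferred rather than visible in the trace:

#!/usr/bin/env bash
size_kb=1048576               # requested hugepage-backed memory, in kB
default_hugepage_kb=2048      # assumed: Hugepagesize is reported as 2048 kB above
node_ids=(0)                  # the single node id passed in ('0')

(( size_kb >= default_hugepage_kb )) || { echo "request smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512

declare -A nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages      # 512 pages requested on node 0
done

# The trace then runs the setup script with these values in the environment:
#   NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]}"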
16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:13:28.903 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8955544 kB' 'MemAvailable: 10543020 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451716 kB' 'Inactive: 1473232 kB' 'Active(anon): 131664 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136796 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74536 kB' 'KernelStack: 6488 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.904 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:28.905 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
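The remainder of this block is verify_nr_hugepages gathering the numbers it compares later: the transparent_hugepage setting was already checked against [never], AnonHugePages came back as 0 (anon=0), and the same get_meminfo walk is now repeated for HugePages_Surp and then HugePages_Rsvd. A condensed sketch of that verification flow, reusing the get_meminfo sketch above; the closing comparison mirrors the node0=... expecting ... check seen at the end of default_setup and is a paraphrase, not the exact hugepages.sh logic:

verify_nr_hugepages_sketch() {   # hypothetical wrapper name; takes the expected page count
    local expected=$1 anon surp resv total

    # THP must not be forced to [never]; the trace tests the 'enabled' string the same way.
    [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] || return 1

    anon=$(get_meminfo AnonHugePages)   # 0 kB in the trace above
    surp=$(get_meminfo HugePages_Surp)  # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)  # reserved but not yet faulted pages

    total=$(get_meminfo HugePages_Total)
    echo "node0=$total expecting $expected"
    [[ $total == "$expected" ]]
}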
00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.168 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8955544 kB' 'MemAvailable: 10543020 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451388 kB' 'Inactive: 1473232 kB' 'Active(anon): 131336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136792 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74532 kB' 'KernelStack: 6480 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.169 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.170 16:51:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[the get_meminfo HugePages_Surp read loop walks the remaining /proc/meminfo fields (PageTables through HugePages_Rsvd), continuing past each until HugePages_Surp matches]
00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:13:29.170 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[get_meminfo sets get=HugePages_Rsvd and node=, keeps mem_f=/proc/meminfo, mapfiles it into mem[], strips any "Node <N> " prefixes and splits fields on IFS=': ']
00:13:29.171 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8955892 kB' 'MemAvailable: 10543368 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451412 kB' 'Inactive: 1473232 kB' 'Active(anon): 131360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122484 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136796 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74536 kB' 'KernelStack: 6464 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[the read loop skips MemTotal through HugePages_Free, continuing past each field until HugePages_Rsvd matches]
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:13:29.172 nr_hugepages=512
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:13:29.172 resv_hugepages=0
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:13:29.172 surplus_hugepages=0
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:13:29.172 anon_hugepages=0
00:13:29.172 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
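For readers following the trace: each get_meminfo call above amounts to picking a single field out of /proc/meminfo (or out of a per-node meminfo file when a node index is passed). The following is a minimal sketch of that lookup under a hypothetical name, get_meminfo_sketch; it is not SPDK's setup/common.sh helper, which uses mapfile plus the read loop being traced here.

    get_meminfo_sketch() {
        # key: e.g. HugePages_Surp; node (optional): NUMA node index.
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node files live under /sys and prefix each line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Drop the "Node <N> " prefix, then print the value that follows "<key>:".
        sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$key" '$1 == (k ":") { print $2; exit }'
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the snapshot printed above
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in the snapshot printed above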
00:13:29.173 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:13:29.173 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:13:29.173 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[get_meminfo sets get=HugePages_Total and node=, keeps mem_f=/proc/meminfo, mapfiles it into mem[], strips any "Node <N> " prefixes and splits fields on IFS=': ']
00:13:29.173 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956204 kB' 'MemAvailable: 10543680 kB' 'Buffers: 2436 kB' 'Cached: 1801312 kB' 'SwapCached: 0 kB' 'Active: 451432 kB' 'Inactive: 1473232 kB' 'Active(anon): 131380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136796 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74536 kB' 'KernelStack: 6448 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[the read loop skips MemTotal through Unaccepted, continuing past each field until HugePages_Total matches]
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
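The arithmetic checks and get_nodes enumeration just traced boil down to: re-read the global hugepage counters, list the NUMA nodes under /sys, and confirm the 512-page request was honored. The block below is a hypothetical recap of that bookkeeping, not hugepages.sh itself, and the exact accounting in the real script may differ.

    #!/usr/bin/env bash
    # Recap (hypothetical) of the verification step traced above.
    shopt -s extglob nullglob

    requested=512   # matches the nr_hugepages=512 echoed earlier in the log

    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  { print $2 }' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  { print $2 }' /proc/meminfo)

    # Same node enumeration as get_nodes: one entry per /sys node directory.
    nodes=(/sys/devices/system/node/node+([0-9]))
    echo "no_nodes=${#nodes[@]}"

    # Check in the spirit of the traced (( 512 == nr_hugepages + surp + resv )).
    if (( requested == total + surp + resv )); then
        echo "hugepage request satisfied: ${total} total, ${surp} surplus, ${resv} reserved"
    else
        echo "hugepage count mismatch" >&2
    fi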
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[get_meminfo sets get=HugePages_Surp and node=0, finds /sys/devices/system/node/node0/meminfo, switches mem_f to it, mapfiles it into mem[], strips the "Node 0 " prefixes and splits fields on IFS=': ']
00:13:29.175 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956204 kB' 'MemUsed: 3285768 kB' 'SwapCached: 0 kB' 'Active: 451404 kB' 'Inactive: 1473232 kB' 'Active(anon): 131352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1803748 kB' 'Mapped: 48696 kB' 'AnonPages: 122460 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62260 kB' 'Slab: 136796 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[the read loop over the node0 fields is in progress, continuing past MemTotal through SUnreclaim; the scan has not yet reached HugePages_Surp at this point in the log]
00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
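Editor's note: the field-by-field scan above ends with "echo 0" / "return 0" and the check "node0=512 expecting 512" just below. A condensed, hypothetical version of what that per-node check boils down to is sketched here (variable names and the exact accounting of surplus pages are assumptions, not the real hugepages.sh):

#!/usr/bin/env bash
# Hypothetical condensed check: does node 0 hold the 512 x 2 MiB pages requested,
# counting any surplus pages the kernel added?
expected=512
node=0
meminfo=/sys/devices/system/node/node${node}/meminfo
free=$(awk '/HugePages_Free:/ {print $NF}' "$meminfo")
surp=$(awk '/HugePages_Surp:/ {print $NF}' "$meminfo")
echo "node${node}=$((free + surp)) expecting ${expected}"
[[ $((free + surp)) -eq $expected ]] || { echo "per-node allocation mismatch" >&2; exit 1; }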
00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:29.176 node0=512 expecting 512 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:29.176 00:13:29.176 real 0m0.575s 00:13:29.176 user 0m0.263s 00:13:29.176 sys 0m0.348s 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.176 ************************************ 00:13:29.176 16:51:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:29.176 END TEST per_node_1G_alloc 00:13:29.176 ************************************ 00:13:29.176 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:13:29.176 16:51:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:13:29.176 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:29.176 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.176 16:51:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:29.176 ************************************ 00:13:29.176 START TEST even_2G_alloc 00:13:29.176 ************************************ 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:29.176 16:51:30 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:29.176 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:29.177 16:51:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:29.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:29.482 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:29.482 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.745 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908896 kB' 'MemAvailable: 9496376 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451676 kB' 'Inactive: 1473236 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136784 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74524 kB' 'KernelStack: 6500 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
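Editor's note: the even_2G_alloc trace above first tests "always [madvise] never != *[never]*" before reading AnonHugePages, i.e. it only probes anonymous THP usage when transparent hugepages are not disabled. A hypothetical standalone version of that probe (file paths are the standard kernel interfaces; the variable names are mine):

#!/usr/bin/env bash
# Hypothetical sketch of the anonymous-THP probe performed in the trace above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon} kB"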
00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.746 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9496124 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451212 kB' 'Inactive: 1473236 kB' 'Active(anon): 131160 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122516 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136788 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74528 kB' 'KernelStack: 6480 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.747 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 
16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.748 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 
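Editor's note: after anon and surp come out as 0, the trace moves on to HugePages_Rsvd and later compares the totals against the 1024 pages requested with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A hypothetical summary of that global verification step (read_meminfo is my helper name, and treating any surplus as a failure is an assumption about what the test expects):

#!/usr/bin/env bash
# Hypothetical summary of the global hugepage verification traced above.
read_meminfo() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

total=$(read_meminfo HugePages_Total)
free=$(read_meminfo HugePages_Free)
rsvd=$(read_meminfo HugePages_Rsvd)
surp=$(read_meminfo HugePages_Surp)

echo "total=${total} free=${free} rsvd=${rsvd} surp=${surp}"
[[ $total -eq 1024 && $surp -eq 0 ]] || exit 1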
00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9496124 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451176 kB' 'Inactive: 1473236 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122232 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136788 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74528 kB' 'KernelStack: 6480 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.749 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.749 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.750 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.751 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:29.752 nr_hugepages=1024 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:29.752 resv_hugepages=0 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:29.752 surplus_hugepages=0 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:29.752 anon_hugepages=0 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
-- # local mem_f mem 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9496124 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451228 kB' 'Inactive: 1473236 kB' 'Active(anon): 131176 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122584 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136784 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74524 kB' 'KernelStack: 6496 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.752 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.753 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemUsed: 4333328 kB' 'SwapCached: 0 kB' 'Active: 451400 kB' 'Inactive: 1473236 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 
'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1803752 kB' 'Mapped: 48696 kB' 'AnonPages: 122460 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62260 kB' 'Slab: 136780 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.754 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
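For the per-node pass the same scan runs against /sys/devices/system/node/node0/meminfo (its snapshot above reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Surp: 0), and the test then confirms node0 holds the full 1024-page allocation. A rough, illustrative equivalent of that per-node check using the standard sysfs hugepage counters rather than the project's setup/hugepages.sh helpers:

    # Sketch only: verify each NUMA node exposes the expected 2 MiB huge page count
    # (1024 per node in this single-node run).
    expected=1024
    for node in /sys/devices/system/node/node[0-9]*; do
        nr=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node##*/}=$nr expecting $expected"
        [[ $nr -eq $expected ]] || exit 1
    done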
00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:29.755 node0=1024 expecting 1024 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:29.755 00:13:29.755 real 0m0.587s 00:13:29.755 user 0m0.285s 00:13:29.755 sys 0m0.342s 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.755 16:51:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:29.755 ************************************ 00:13:29.755 END TEST even_2G_alloc 00:13:29.755 ************************************ 00:13:29.755 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:13:29.756 16:51:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:13:29.756 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:29.756 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.756 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:29.756 ************************************ 00:13:29.756 START TEST odd_alloc 00:13:29.756 ************************************ 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
odd_alloc 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:29.756 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:30.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:30.326 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:30.326 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906124 kB' 'MemAvailable: 9493604 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451936 kB' 'Inactive: 1473236 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 49032 kB' 'Shmem: 10464 kB' 'KReclaimable: 62260 kB' 'Slab: 136768 kB' 'SReclaimable: 62260 kB' 'SUnreclaim: 74508 kB' 'KernelStack: 6532 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.326 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:13:30.327 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906124 kB' 'MemAvailable: 9493596 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451344 kB' 'Inactive: 1473236 kB' 'Active(anon): 131292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136760 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6480 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
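
The meminfo snapshots captured for odd_alloc report HugePages_Total: 1025, Hugepagesize: 2048 kB and Hugetlb: 2099200 kB, which lines up with the 2098176 kB request (HUGEMEM=2049) made when the test started: an odd 2 MB page count is exactly the point of this test case. A rough sketch of that sizing arithmetic follows; it reproduces the numbers seen in the trace but is only an illustration, not necessarily the exact formula used by setup/hugepages.sh.

#!/usr/bin/env bash
# Sizing arithmetic behind the odd_alloc request, assuming 2048 kB hugepages.
HUGEMEM=2049                                                   # megabytes, as set by the test
hugepgsz_kb=2048                                               # Hugepagesize reported above
size_kb=$(( HUGEMEM * 1024 ))                                  # 2098176 kB requested
nr_hugepages=$(( (size_kb + hugepgsz_kb - 1) / hugepgsz_kb ))  # round up -> 1025 pages (odd)
echo "nr_hugepages=$nr_hugepages"                              # matches HugePages_Total: 1025
echo "hugetlb_kb=$(( nr_hugepages * hugepgsz_kb ))"            # 2099200 kB, matches Hugetlb
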
00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.328 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.329
16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.329 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906124 kB' 'MemAvailable: 9493596 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451372 kB' 'Inactive: 1473236 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136760 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6480 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.330 
16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.330 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- #
continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 
16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.331 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:30.332 nr_hugepages=1025 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:13:30.332 resv_hugepages=0 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:30.332 surplus_hugepages=0 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:30.332 anon_hugepages=0 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906212 kB' 'MemAvailable: 9493684 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 451124 kB' 'Inactive: 1473236 kB' 'Active(anon): 131072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122488 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136760 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6480 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 
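The long run of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' entries through this part of the log is setup/common.sh's get_meminfo helper scanning one /proc/meminfo (or per-node sysfs meminfo) field per loop iteration under xtrace. A minimal sketch of that lookup pattern, reconstructed from the commands visible in the trace; the name get_meminfo_sketch and the fallback return are illustrative, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below, as in the traced expansion

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        # Per-node lookups switch to the sysfs copy when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node meminfo lines carry a "Node N " prefix; strip it as the trace shows.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan every "Key: value" line until the requested key matches; this is
        # what produces one [[ ... ]] / continue pair per field in the log above.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Rsvd   -> 0
    #      get_meminfo_sketch HugePages_Surp 0 -> 0 (node 0)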
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:30.332 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.333 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
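At this point the odd_alloc verification has gathered HugePages_Rsvd=0 and HugePages_Total=1025 and, just below, reads HugePages_Surp for node 0; it then checks that the kernel really created the odd-sized pool that was requested. A compact recap of that bookkeeping using the numbers from this run; variable names are illustrative where they differ from hugepages.sh:

    # Illustrative recap of the odd_alloc accounting traced above.
    nr_hugepages=1025   # requested odd page count
    resv=0              # HugePages_Rsvd from get_meminfo
    surp=0              # HugePages_Surp from get_meminfo
    total=1025          # HugePages_Total from get_meminfo

    # System-wide: the reported total must equal request + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total: $total"

    # Single-NUMA-node VM: the whole pool is expected on node 0.
    nodes_test=([0]=1025)
    (( nodes_test[0] == 1025 )) && echo "node0=${nodes_test[0]} expecting 1025"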
-- # local var val 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906212 kB' 'MemUsed: 4335760 kB' 'SwapCached: 0 kB' 'Active: 451176 kB' 'Inactive: 1473236 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1803752 kB' 'Mapped: 48696 kB' 'AnonPages: 122280 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62248 kB' 'Slab: 136760 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.334 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.335 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.593 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:30.594 node0=1025 expecting 1025 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:13:30.594 00:13:30.594 real 0m0.622s 00:13:30.594 user 0m0.322s 00:13:30.594 sys 0m0.333s 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.594 16:51:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:30.594 ************************************ 00:13:30.594 END TEST odd_alloc 00:13:30.594 ************************************ 00:13:30.594 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:13:30.594 16:51:31 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:13:30.594 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
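odd_alloc finishes here in roughly 0.6 seconds and run_test immediately launches custom_alloc; the '[' 2 -le 1 ']' check, the asterisk banners and the real/user/sys timing all come from that wrapper. A paraphrased sketch of the wrapper's shape, based only on what the trace shows; run_test_sketch is not the real autotest_common.sh implementation, which also manages xtrace and failure reporting:

    # Paraphrased shape of the run_test wrapper whose banners and timing
    # frame each sub-test in this log.
    run_test_sketch() {
        [ "$#" -le 1 ] && return 1   # needs a test name plus a command, cf. '[' 2 -le 1 ']'
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # e.g. run_test_sketch custom_alloc custom_alloc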
# '[' 2 -le 1 ']' 00:13:30.594 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.594 16:51:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:30.594 ************************************ 00:13:30.594 START TEST custom_alloc 00:13:30.594 ************************************ 00:13:30.594 16:51:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:13:30.594 16:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:13:30.594 16:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:30.594 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:30.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:30.853 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:30.853 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:30.853 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8966188 kB' 'MemAvailable: 10553664 kB' 'Buffers: 2436 kB' 'Cached: 1801320 kB' 'SwapCached: 0 kB' 'Active: 452064 kB' 'Inactive: 1473240 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122884 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136664 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74416 kB' 'KernelStack: 6484 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.853 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:30.854 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.116 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8966188 kB' 'MemAvailable: 10553664 kB' 'Buffers: 2436 kB' 'Cached: 1801320 kB' 'SwapCached: 0 kB' 'Active: 451660 kB' 'Inactive: 1473240 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136672 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6460 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.117 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
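Note on the trace above: the long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries are setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested key (AnonHugePages here, which comes back 0, so anon=0). A simplified stand-alone equivalent of that lookup is sketched below for reference; the helper name and the while-read loop are illustrative only, not the script's exact implementation (the real code uses mapfile plus the per-key test/continue loop seen in the trace).

get_meminfo_sketch() {  # get_meminfo_sketch <Key>  -> prints the key's value, or 0 if absent
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # same match-or-continue pattern as the traced loop, one meminfo field per iteration
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done </proc/meminfo
    echo 0
}
anon=$(get_meminfo_sketch AnonHugePages)   # 0 on this VM, matching the trace above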
00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.118 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
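For orientation: custom_alloc asked get_test_nr_hugepages for 1048576 kB and ended up with HUGENODE='nodes_hp[0]=512'; with the 2048 kB Hugepagesize reported in the meminfo dumps above, that is simply 1048576 / 2048 = 512 pages, consistent with 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB'. A two-line illustration of the arithmetic (variable names are ad hoc, not taken from the script):

size_kb=1048576 hugepagesize_kb=2048    # requested size and the Hugepagesize from /proc/meminfo
echo $(( size_kb / hugepagesize_kb ))   # 512 -> the nr_hugepages placed on node 0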
00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:31.119 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.119 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8966188 kB' 'MemAvailable: 10553664 kB' 'Buffers: 2436 kB' 'Cached: 1801320 kB' 'SwapCached: 0 kB' 'Active: 451600 kB' 'Inactive: 1473240 kB' 'Active(anon): 131548 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122732 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136672 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6444 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
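At this point verify_nr_hugepages is collecting the accounting counters one by one: anon=0 and surp=0 so far, with HugePages_Rsvd being scanned next. A hypothetical spot-check in the same spirit is sketched below; it is not SPDK's actual verification logic, only what these counters let you assert by hand for this particular run.

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 in the dumps above
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
(( total == 512 && surp == 0 && rsvd == 0 )) || echo 'unexpected hugepage accounting for custom_alloc' >&2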
00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.120 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.120 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:13:31.121 nr_hugepages=512 00:13:31.121 resv_hugepages=0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:31.121 surplus_hugepages=0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:31.121 anon_hugepages=0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:31.121 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.122 
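The wall of "-- # continue" statements above is setup/common.sh's get_meminfo scanning /proc/meminfo one "key: value" line at a time with IFS=': ' and skipping every key that is not the one requested; here it finally matches HugePages_Rsvd, echoes 0, and hugepages.sh records resv=0 before re-running the same lookup for HugePages_Total. A minimal sketch of that lookup, reconstructed from the xtrace above rather than copied from setup/common.sh, so names and details are approximate:

# Reconstruction of the lookup pattern visible in the trace (approximate, not
# a verbatim copy of setup/common.sh).
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    local -a mem
    # A per-node query reads that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the skipped keys seen in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Rsvd   # prints 0 on this host, as in the trace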
16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8966188 kB' 'MemAvailable: 10553664 kB' 'Buffers: 2436 kB' 'Cached: 1801320 kB' 'SwapCached: 0 kB' 'Active: 451632 kB' 'Inactive: 1473240 kB' 'Active(anon): 131580 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136672 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6412 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.122 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.123 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.123 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8965936 kB' 'MemUsed: 3276036 kB' 'SwapCached: 0 kB' 'Active: 451492 kB' 'Inactive: 1473240 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1803756 kB' 'Mapped: 48700 kB' 'AnonPages: 122572 kB' 'Shmem: 10464 kB' 'KernelStack: 6396 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62248 kB' 'Slab: 136672 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 
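At this point get_meminfo has matched HugePages_Total (echo 512) and hugepages.sh switches to per-node accounting: get_nodes globs /sys/devices/system/node/node+([0-9]) (a single node on this VM, so no_nodes=1), and get_meminfo is re-run with node=0, which makes it read /sys/devices/system/node/node0/meminfo and strip the leading "Node 0 " from every line. A short standalone sketch of that per-node walk; awk stands in for the script's mapfile/prefix-strip mechanics, and the 512-page expectation is specific to this custom_alloc run:

# Per-node hugepage check, sketched from the trace (awk replaces the script's
# mapfile + "Node N " prefix stripping for brevity).
shopt -s extglob nullglob
expected=512   # what this custom_alloc run asked for on node 0
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    # Per-node lines look like: "Node 0 HugePages_Free:   512"
    free=$(awk '$3 == "HugePages_Free:" {print $4}' "$node_dir/meminfo")
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
    echo "node$node=$free expecting $expected (surplus $surp)"
done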
16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.124 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:31.125 node0=512 expecting 512 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:31.125 00:13:31.125 real 0m0.606s 00:13:31.125 user 0m0.293s 00:13:31.125 sys 0m0.355s 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.125 ************************************ 00:13:31.125 END TEST custom_alloc 00:13:31.125 ************************************ 00:13:31.125 16:51:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:31.125 16:51:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:13:31.125 16:51:32 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:13:31.125 16:51:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:31.125 16:51:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.125 16:51:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:31.125 ************************************ 00:13:31.125 START TEST no_shrink_alloc 00:13:31.125 ************************************ 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- 
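run_test has just closed custom_alloc (0m0.606s elapsed) and opened no_shrink_alloc, whose first step, get_test_nr_hugepages 2097152 0, turns the requested size into a page count for node 0. The division itself is not in the trace, only size=2097152 going in and nr_hugepages=1024 coming out, so the following is an inferred cross-check rather than the script's code: 2097152 / 2048 = 1024, which also matches the later dump ('HugePages_Total: 1024', 'Hugetlb: 2097152 kB' = 1024 x 2048 kB).

# Inferred cross-check only; the trace shows the inputs and the result, not the division.
size=2097152          # argument to get_test_nr_hugepages in the trace
hugepagesize_kb=2048  # 'Hugepagesize: 2048 kB' from the meminfo dumps
echo $((size / hugepagesize_kb))   # 1024 == nr_hugepages in the following lines
echo $((1024 * hugepagesize_kb))   # 2097152 == 'Hugetlb: 2097152 kB' in the later dump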
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:31.125 16:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:31.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:31.697 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:31.697 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
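Here scripts/setup.sh leaves the mounted vda devices alone and keeps the two emulated NVMe controllers on uio_pci_generic, then verify_nr_hugepages begins by testing the transparent-hugepage state string "always [madvise] never" against *\[\n\e\v\e\r\]*: anonymous huge pages are only looked up when THP is not pinned to [never]. A sketch of that gate; the sysfs path is an assumption, since the trace shows only the string being compared, not where it was read from:

# THP gate as seen at hugepages.sh@96/@97 (sysfs path assumed; only the
# comparison itself is visible in the trace).
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon_hugepages=$anon"   # mirrors the anon_hugepages=0 reported at hugepages.sh@105 for the previous test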
/sys/devices/system/node/node/meminfo ]] 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918540 kB' 'MemAvailable: 9506012 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 447440 kB' 'Inactive: 1473236 kB' 'Active(anon): 127388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118500 kB' 'Mapped: 48084 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136524 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74276 kB' 'KernelStack: 6400 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.697 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
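The meminfo dump above is the first one taken after setup.sh raised the pool from 512 to 1024 pages: HugePages_Total is now 1024, Hugetlb is 2097152 kB, and MemFree has dropped from the 8966188 kB seen in the custom_alloc dump to 7918540 kB. A rough cross-check of that delta, using values copied from the two dumps; the small remainder is other activity between the snapshots, so it is only approximate:

# Values copied from the two dumps in this log; approximate by design.
echo $((8966188 - 7918540))   # 1047648 kB drop in MemFree between the dumps
echo $((512 * 2048))          # 1048576 kB == the 512 extra 2048 kB pages reserved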
-r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.698 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918540 kB' 'MemAvailable: 9506012 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 447220 kB' 'Inactive: 1473236 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118276 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136520 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74272 kB' 'KernelStack: 6400 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 
kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.699 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.700 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918568 kB' 'MemAvailable: 9506040 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 446932 kB' 'Inactive: 1473236 kB' 'Active(anon): 126880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117992 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136520 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74272 kB' 'KernelStack: 6368 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.701 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 
16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.702 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:31.703 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:31.703 nr_hugepages=1024 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:31.703 resv_hugepages=0 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:31.703 surplus_hugepages=0 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:31.703 anon_hugepages=0 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918568 kB' 'MemAvailable: 9506040 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 446932 kB' 'Inactive: 1473236 kB' 'Active(anon): 126880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118252 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136520 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74272 kB' 'KernelStack: 6368 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.703 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
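[editor note] The long run of "IFS=': '" / "read -r var val _" / "continue" entries above is the hugepages helper scanning /proc/meminfo one key at a time until it reaches the requested field (HugePages_Rsvd first, then HugePages_Total), echoing that value and returning; every non-matching key shows up as one "continue" trace line. A minimal sketch of that lookup, assuming only what the trace itself shows (the function name get_meminfo_sketch is illustrative, not the actual setup/common.sh source):

    #!/usr/bin/env bash
    # Simplified reconstruction of the lookup traced above: split each
    # meminfo line on ': ' and skip keys until the requested one matches.
    get_meminfo_sketch() {
        local get=$1              # e.g. HugePages_Total, HugePages_Rsvd
        local mem_f=/proc/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # Each non-matching key corresponds to a "continue" line in the xtrace.
            [[ $var == "$get" ]] || continue
            echo "$val"           # the "kB" unit, when present, lands in the discarded field
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Total   # prints 1024 in the run logged here
    get_meminfo_sketch HugePages_Rsvd    # prints 0 in the run logged here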
00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.704 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.704 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918984 kB' 'MemUsed: 4322988 kB' 'SwapCached: 0 kB' 'Active: 446872 kB' 'Inactive: 1473236 kB' 'Active(anon): 126820 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1803752 kB' 'Mapped: 47956 kB' 'AnonPages: 118212 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62248 kB' 'Slab: 136520 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.705 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:31.706 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:31.707 node0=1024 expecting 1024 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:31.707 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:32.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:32.278 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:32.278 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:32.278 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:32.278 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915472 kB' 'MemAvailable: 9502944 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 447752 kB' 'Inactive: 1473236 kB' 'Active(anon): 127700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118852 kB' 'Mapped: 48128 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136512 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74264 kB' 'KernelStack: 6472 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.278 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 
16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
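[editor note] By this point the per-node pass has completed (mem_f switched to /sys/devices/system/node/node0/meminfo, the "Node 0 " prefix stripped via the mem=("${mem[@]#Node +([0-9]) }") expansion, HugePages_Surp read as 0, and "node0=1024 expecting 1024" confirmed), setup.sh has reported that 1024 hugepages are already allocated although only 512 were requested, and a second verification pass is reading AnonHugePages from the system-wide meminfo. A hedged sketch of the per-node variant, again a simplified reconstruction from the trace rather than the real setup/common.sh (node_meminfo_sketch is a made-up name):

    #!/usr/bin/env bash
    # Per-node lookup as suggested by the trace: with a node argument, read
    # /sys/devices/system/node/node<N>/meminfo and strip the "Node <N> " prefix
    # so lines parse the same way as /proc/meminfo lines.
    shopt -s extglob

    node_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, if any

        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    node_meminfo_sketch HugePages_Surp 0   # prints 0 in the run logged here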
00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.279 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:32.280 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915508 kB' 'MemAvailable: 9502980 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 447188 kB' 'Inactive: 1473236 kB' 'Active(anon): 127136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118248 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136512 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74264 kB' 'KernelStack: 6384 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 
16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.281 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
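The long printf '%s\n' 'MemTotal: ...' entry a few lines up is the xtrace of printf '%s\n' "${mem[@]}", that is, the complete /proc/meminfo snapshot of the test VM that feeds the scan. Its hugepage figures are internally consistent, which is what the surrounding checks lean on; a quick sanity calculation on the values shown:

    # HugePages_Total (1024) times Hugepagesize (2048 kB) should equal the Hugetlb line
    echo $(( 1024 * 2048 ))   # 2097152, matching 'Hugetlb: 2097152 kB'
    # and the pool is idle: HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0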
00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:32.282 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915508 kB' 'MemAvailable: 9502980 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 446984 kB' 'Inactive: 1473236 kB' 'Active(anon): 126932 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118060 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136512 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74264 kB' 'KernelStack: 6400 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.283 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
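With the three scans done, the hugepages.sh caller holds anon=0, surp=0 and resv=0 (the resv=0, nr_hugepages=1024 and HugePages_Total lookups appear in the entries just below), and the no_shrink_alloc test then asserts that the configured pool survived the allocation it exercised. The order of operations visible in the hugepages.sh trace, script lines @97 through @110, is roughly the following sketch; nr_hugepages is set earlier in the script, outside this excerpt, and 1024 stands in for the expected page count of this run:

    anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( 1024 == nr_hugepages + surp + resv ))   # no surplus or reserved pages masking a shrink
    (( 1024 == nr_hugepages ))                 # the pool itself still holds every page
    get_meminfo HugePages_Total                # fetched again at @110; its use lies past this excerpt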
00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.284 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:32.285 nr_hugepages=1024 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:32.285 resv_hugepages=0 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:32.285 surplus_hugepages=0 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:32.285 anon_hugepages=0 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915508 kB' 'MemAvailable: 9502980 kB' 'Buffers: 2436 kB' 'Cached: 1801316 kB' 'SwapCached: 0 kB' 'Active: 446980 kB' 'Inactive: 1473236 kB' 'Active(anon): 126928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118316 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 62248 kB' 'Slab: 136512 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74264 kB' 'KernelStack: 6400 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 
kB' 'DirectMap1G: 9437184 kB' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.285 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.286 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915508 kB' 'MemUsed: 4326464 kB' 'SwapCached: 0 kB' 'Active: 447596 kB' 'Inactive: 1473236 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1473236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 1803752 kB' 'Mapped: 47956 kB' 'AnonPages: 118700 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62248 kB' 'Slab: 136512 kB' 'SReclaimable: 62248 kB' 'SUnreclaim: 74264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 
16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.287 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:32.288 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:32.289 node0=1024 expecting 1024 00:13:32.289 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:32.289 16:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:32.289 00:13:32.289 real 0m1.149s 00:13:32.289 user 0m0.565s 00:13:32.289 sys 0m0.668s 00:13:32.289 16:51:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.289 16:51:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:32.289 ************************************ 00:13:32.289 END TEST no_shrink_alloc 00:13:32.289 ************************************ 00:13:32.289 16:51:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:32.289 16:51:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:32.289 ************************************ 00:13:32.289 END TEST hugepages 00:13:32.289 ************************************ 00:13:32.289 00:13:32.289 real 0m5.117s 00:13:32.289 user 0m2.402s 00:13:32.289 sys 0m2.900s 00:13:32.289 16:51:33 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.289 16:51:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:32.668 16:51:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:13:32.668 16:51:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:13:32.668 16:51:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:32.668 16:51:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.668 16:51:33 
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:32.668 ************************************ 00:13:32.668 START TEST driver 00:13:32.668 ************************************ 00:13:32.668 16:51:33 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:13:32.668 * Looking for test storage... 00:13:32.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:32.668 16:51:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:13:32.668 16:51:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:32.668 16:51:33 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:33.235 16:51:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:13:33.235 16:51:34 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:33.235 16:51:34 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.235 16:51:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:13:33.235 ************************************ 00:13:33.235 START TEST guess_driver 00:13:33.235 ************************************ 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:13:33.235 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:13:33.235 Looking for 
driver=uio_pci_generic 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:13:33.235 16:51:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:33.802 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:13:33.802 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:13:33.802 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:34.060 16:51:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:34.626 00:13:34.626 real 0m1.547s 00:13:34.626 user 0m0.525s 00:13:34.626 sys 0m1.045s 00:13:34.626 16:51:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.626 16:51:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:13:34.626 ************************************ 00:13:34.626 END TEST guess_driver 00:13:34.626 ************************************ 00:13:34.626 16:51:36 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:13:34.626 00:13:34.626 real 0m2.331s 00:13:34.626 user 0m0.786s 00:13:34.626 sys 0m1.641s 00:13:34.626 16:51:36 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.626 16:51:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:13:34.626 ************************************ 00:13:34.626 END TEST driver 00:13:34.626 ************************************ 00:13:34.884 16:51:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:13:34.884 16:51:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:13:34.884 16:51:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:34.884 16:51:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.884 16:51:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:34.884 ************************************ 00:13:34.884 START TEST devices 00:13:34.884 ************************************ 00:13:34.884 16:51:36 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:13:34.884 * Looking for test storage... 00:13:34.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:34.884 16:51:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:13:34.884 16:51:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:13:34.884 16:51:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:34.884 16:51:36 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:35.862 16:51:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:35.862 
16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:13:35.862 No valid GPT data, bailing 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:35.862 16:51:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:35.862 16:51:37 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:13:35.862 No valid GPT data, bailing 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:13:35.862 16:51:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:13:35.862 16:51:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:13:35.862 16:51:37 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:13:35.862 16:51:37 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:35.862 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:13:35.863 No valid GPT data, bailing 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:13:35.863 16:51:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:13:35.863 16:51:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:13:35.863 16:51:37 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:13:35.863 No valid GPT data, bailing 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:13:35.863 16:51:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:13:35.863 16:51:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:13:35.863 16:51:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:13:35.863 16:51:37 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:13:35.863 16:51:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:13:35.863 16:51:37 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:35.863 16:51:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.863 16:51:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:13:35.863 ************************************ 00:13:35.863 START TEST nvme_mount 00:13:35.863 ************************************ 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:35.863 16:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:13:37.234 Creating new GPT entries in memory. 00:13:37.234 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:37.234 other utilities. 00:13:37.234 16:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:13:37.234 16:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:37.234 16:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:37.234 16:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:37.234 16:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:13:38.166 Creating new GPT entries in memory. 00:13:38.166 The operation has completed successfully. 
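The trace above (setup/common.sh partition_drive) wipes the test disk and carves a single small partition with sgdisk, holding flock on the device so the uevent-sync helper sees a consistent view. A minimal standalone sketch of the same sequence, using only values visible in the trace (262144 blocks comes from size=1073741824 divided by the 4096-byte logical block size the script assumes; partprobe stands in for sync_dev_uevents.sh and is an assumption, not what the script calls):

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR metadata
  flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1: blocks 2048..264191 (262144 blocks)
  partprobe "$disk"                                  # wait for /dev/nvme0n1p1 to appear (helper does this via uevents)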
00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57800 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.166 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.425 16:51:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.425 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.425 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.425 16:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:38.425 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:38.425 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:38.684 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:38.941 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:38.941 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:38.941 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:38.941 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.941 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.198 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.198 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.198 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.198 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:39.456 16:51:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:39.713 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.713 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:13:39.713 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:13:39.713 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.713 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.713 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:40.029 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:40.029 00:13:40.029 real 0m4.097s 00:13:40.029 user 0m0.749s 00:13:40.029 sys 0m1.131s 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.029 ************************************ 00:13:40.029 END TEST nvme_mount 00:13:40.029 ************************************ 00:13:40.029 16:51:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # 
set +x 00:13:40.029 16:51:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:13:40.029 16:51:41 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:13:40.029 16:51:41 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:40.029 16:51:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.029 16:51:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:13:40.029 ************************************ 00:13:40.029 START TEST dm_mount 00:13:40.029 ************************************ 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:40.029 16:51:41 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:13:40.978 Creating new GPT entries in memory. 00:13:40.978 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:40.978 other utilities. 00:13:40.978 16:51:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:13:40.978 16:51:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:40.978 16:51:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:40.978 16:51:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:40.978 16:51:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:13:42.352 Creating new GPT entries in memory. 
00:13:42.352 The operation has completed successfully. 00:13:42.352 16:51:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:13:42.352 16:51:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:42.352 16:51:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:42.352 16:51:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:42.352 16:51:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:13:43.287 The operation has completed successfully. 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58236 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:43.287 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:43.288 
16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:43.288 16:51:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:43.547 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:43.547 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:13:43.547 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:13:43.547 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:43.547 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:43.547 16:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:43.547 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:43.547 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- 
# local dev=0000:00:11.0 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:43.805 16:51:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:44.063 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:13:44.322 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:13:44.322 00:13:44.322 real 0m4.301s 00:13:44.322 user 0m0.521s 00:13:44.322 sys 0m0.786s 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.322 16:51:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:13:44.322 ************************************ 00:13:44.322 END TEST dm_mount 00:13:44.322 ************************************ 00:13:44.322 16:51:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:44.322 16:51:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:44.580 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:44.580 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:44.580 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:44.580 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:44.580 16:51:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:13:44.580 16:51:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:44.838 16:51:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:13:44.838 16:51:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:44.838 16:51:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:13:44.838 16:51:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:13:44.838 16:51:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:13:44.838 00:13:44.838 real 0m9.931s 00:13:44.838 user 0m1.905s 00:13:44.838 sys 0m2.549s 00:13:44.838 16:51:46 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.838 16:51:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:13:44.838 ************************************ 00:13:44.838 END TEST devices 00:13:44.838 ************************************ 00:13:44.838 16:51:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:13:44.838 00:13:44.838 real 0m23.068s 00:13:44.838 user 0m7.468s 00:13:44.838 sys 0m10.377s 00:13:44.838 16:51:46 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.838 ************************************ 00:13:44.838 END TEST setup.sh 00:13:44.838 16:51:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:44.838 ************************************ 00:13:44.838 16:51:46 -- common/autotest_common.sh@1142 -- # return 0 00:13:44.838 16:51:46 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:45.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:45.403 Hugepages 00:13:45.403 node hugesize free / total 00:13:45.403 node0 1048576kB 0 / 0 00:13:45.403 node0 2048kB 2048 / 2048 00:13:45.403 
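The hugepage summary above is what scripts/setup.sh status prints once the device tests have released their memory; if those numbers ever need checking by hand, the same per-node counters can be read straight from sysfs, for example:

  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages    # total 2 MiB pages on node0 (2048 here)
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages  # pages currently free (also 2048 here)
  grep -i huge /proc/meminfo                                                    # system-wide view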
00:13:45.403 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:45.660 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:13:45.661 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:13:45.661 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:13:45.661 16:51:47 -- spdk/autotest.sh@130 -- # uname -s 00:13:45.661 16:51:47 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:13:45.661 16:51:47 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:13:45.661 16:51:47 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:46.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:46.592 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.592 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.592 16:51:48 -- common/autotest_common.sh@1532 -- # sleep 1 00:13:47.966 16:51:49 -- common/autotest_common.sh@1533 -- # bdfs=() 00:13:47.966 16:51:49 -- common/autotest_common.sh@1533 -- # local bdfs 00:13:47.966 16:51:49 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:13:47.966 16:51:49 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:13:47.966 16:51:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:47.966 16:51:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:13:47.966 16:51:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:47.966 16:51:49 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:47.966 16:51:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:47.966 16:51:49 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:13:47.966 16:51:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:47.966 16:51:49 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:48.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:48.224 Waiting for block devices as requested 00:13:48.224 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.224 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.482 16:51:49 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:13:48.482 16:51:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:13:48.482 16:51:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:13:48.482 16:51:49 -- 
common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:13:48.482 16:51:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:13:48.482 16:51:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:13:48.482 16:51:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1557 -- # continue 00:13:48.482 16:51:49 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:13:48.482 16:51:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:13:48.482 16:51:49 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:13:48.482 16:51:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:13:48.482 16:51:49 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:13:48.482 16:51:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:13:48.482 16:51:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:13:48.482 16:51:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:13:48.482 16:51:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:13:48.482 16:51:49 -- common/autotest_common.sh@1557 -- # continue 00:13:48.482 16:51:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:13:48.482 16:51:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.482 16:51:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.482 16:51:50 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:13:48.482 16:51:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:48.482 16:51:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.482 16:51:50 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:49.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.414 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.414 16:51:50 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:13:49.414 16:51:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.414 16:51:50 -- common/autotest_common.sh@10 -- # set +x 
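The pre_cleanup pass above asks each controller, via nvme-cli, whether it supports namespace management (OACS bit 3) and whether any capacity is left unallocated; both controllers report oacs=0x12a and unvmcap=0, so there is nothing to revert and the loop just continues. A rough equivalent of that probe, assuming nvme-cli is installed and the command is run as root:

  oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)   # " 0x12a" in this run
  if (( oacs & 0x8 )); then
      echo "namespace management supported"                   # bit 3 of OACS
  fi
  nvme id-ctrl /dev/nvme1 | grep unvmcap                      # 0 -> no unallocated capacity to revert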
00:13:49.414 16:51:51 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:13:49.414 16:51:51 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:13:49.672 16:51:51 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:13:49.672 16:51:51 -- common/autotest_common.sh@1577 -- # bdfs=() 00:13:49.672 16:51:51 -- common/autotest_common.sh@1577 -- # local bdfs 00:13:49.672 16:51:51 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:13:49.672 16:51:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:49.672 16:51:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:13:49.672 16:51:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:49.672 16:51:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:49.672 16:51:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:49.672 16:51:51 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:13:49.672 16:51:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:49.672 16:51:51 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:13:49.672 16:51:51 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:13:49.672 16:51:51 -- common/autotest_common.sh@1580 -- # device=0x0010 00:13:49.672 16:51:51 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:49.672 16:51:51 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:13:49.672 16:51:51 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:13:49.672 16:51:51 -- common/autotest_common.sh@1580 -- # device=0x0010 00:13:49.672 16:51:51 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:49.672 16:51:51 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:13:49.672 16:51:51 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:13:49.672 16:51:51 -- common/autotest_common.sh@1593 -- # return 0 00:13:49.672 16:51:51 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:13:49.672 16:51:51 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:13:49.672 16:51:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:49.672 16:51:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:49.672 16:51:51 -- spdk/autotest.sh@162 -- # timing_enter lib 00:13:49.672 16:51:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.672 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:13:49.672 16:51:51 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:13:49.672 16:51:51 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:13:49.672 16:51:51 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:13:49.672 16:51:51 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:49.672 16:51:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:49.672 16:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.672 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:13:49.672 ************************************ 00:13:49.672 START TEST env 00:13:49.672 ************************************ 00:13:49.672 16:51:51 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:49.672 * Looking for test storage... 
00:13:49.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:13:49.672 16:51:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:49.672 16:51:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:49.672 16:51:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.672 16:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:13:49.672 ************************************ 00:13:49.672 START TEST env_memory 00:13:49.672 ************************************ 00:13:49.672 16:51:51 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:49.672 00:13:49.672 00:13:49.672 CUnit - A unit testing framework for C - Version 2.1-3 00:13:49.672 http://cunit.sourceforge.net/ 00:13:49.672 00:13:49.672 00:13:49.672 Suite: memory 00:13:49.930 Test: alloc and free memory map ...[2024-07-22 16:51:51.310661] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:49.930 passed 00:13:49.930 Test: mem map translation ...[2024-07-22 16:51:51.382674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:49.930 [2024-07-22 16:51:51.382989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:49.930 [2024-07-22 16:51:51.383354] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:49.930 [2024-07-22 16:51:51.383540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:49.930 passed 00:13:49.930 Test: mem map registration ...[2024-07-22 16:51:51.499456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:13:49.930 [2024-07-22 16:51:51.499579] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:13:49.930 passed 00:13:50.187 Test: mem map adjacent registrations ...passed 00:13:50.187 00:13:50.187 Run Summary: Type Total Ran Passed Failed Inactive 00:13:50.187 suites 1 1 n/a 0 0 00:13:50.187 tests 4 4 4 0 0 00:13:50.187 asserts 152 152 152 0 n/a 00:13:50.187 00:13:50.187 Elapsed time = 0.394 seconds 00:13:50.187 ************************************ 00:13:50.187 END TEST env_memory 00:13:50.187 ************************************ 00:13:50.187 00:13:50.187 real 0m0.442s 00:13:50.187 user 0m0.403s 00:13:50.187 sys 0m0.031s 00:13:50.187 16:51:51 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.187 16:51:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 16:51:51 env -- common/autotest_common.sh@1142 -- # return 0 00:13:50.187 16:51:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:50.187 16:51:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:50.187 16:51:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.187 16:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 ************************************ 00:13:50.187 START TEST env_vtophys 
00:13:50.187 ************************************ 00:13:50.187 16:51:51 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:50.187 EAL: lib.eal log level changed from notice to debug 00:13:50.187 EAL: Detected lcore 0 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 1 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 2 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 3 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 4 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 5 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 6 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 7 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 8 as core 0 on socket 0 00:13:50.187 EAL: Detected lcore 9 as core 0 on socket 0 00:13:50.187 EAL: Maximum logical cores by configuration: 128 00:13:50.187 EAL: Detected CPU lcores: 10 00:13:50.187 EAL: Detected NUMA nodes: 1 00:13:50.187 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:13:50.187 EAL: Detected shared linkage of DPDK 00:13:50.446 EAL: No shared files mode enabled, IPC will be disabled 00:13:50.446 EAL: Selected IOVA mode 'PA' 00:13:50.446 EAL: Probing VFIO support... 00:13:50.446 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:50.446 EAL: VFIO modules not loaded, skipping VFIO support... 00:13:50.446 EAL: Ask a virtual area of 0x2e000 bytes 00:13:50.446 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:50.446 EAL: Setting up physically contiguous memory... 00:13:50.446 EAL: Setting maximum number of open files to 524288 00:13:50.446 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:50.446 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:50.446 EAL: Ask a virtual area of 0x61000 bytes 00:13:50.446 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:50.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:50.446 EAL: Ask a virtual area of 0x400000000 bytes 00:13:50.446 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:50.446 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:50.446 EAL: Ask a virtual area of 0x61000 bytes 00:13:50.446 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:50.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:50.446 EAL: Ask a virtual area of 0x400000000 bytes 00:13:50.446 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:50.446 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:50.446 EAL: Ask a virtual area of 0x61000 bytes 00:13:50.446 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:50.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:50.446 EAL: Ask a virtual area of 0x400000000 bytes 00:13:50.446 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:50.446 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:50.446 EAL: Ask a virtual area of 0x61000 bytes 00:13:50.446 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:50.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:50.446 EAL: Ask a virtual area of 0x400000000 bytes 00:13:50.446 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:50.446 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:50.446 EAL: Hugepages will be freed exactly as allocated. 
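The EAL lines above show the vtophys test coming up in physical-address ('PA') IOVA mode without VFIO, because the VM image has no vfio modules and no emulated IOMMU; that is expected on this rig, not a failure. Two quick checks that reach the same conclusion on a host, offered purely as an illustration:

  lsmod | grep -E '^vfio'          # empty here, hence "Module /sys/module/vfio not found"
  ls /sys/kernel/iommu_groups/     # empty without an (emulated) IOMMU, so EAL selects IOVA mode 'PA'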
00:13:50.446 EAL: No shared files mode enabled, IPC is disabled 00:13:50.446 EAL: No shared files mode enabled, IPC is disabled 00:13:50.446 EAL: TSC frequency is ~2100000 KHz 00:13:50.446 EAL: Main lcore 0 is ready (tid=7feb516e0a40;cpuset=[0]) 00:13:50.446 EAL: Trying to obtain current memory policy. 00:13:50.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:50.446 EAL: Restoring previous memory policy: 0 00:13:50.446 EAL: request: mp_malloc_sync 00:13:50.446 EAL: No shared files mode enabled, IPC is disabled 00:13:50.446 EAL: Heap on socket 0 was expanded by 2MB 00:13:50.446 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:50.446 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:50.446 EAL: Mem event callback 'spdk:(nil)' registered 00:13:50.446 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:13:50.446 00:13:50.446 00:13:50.446 CUnit - A unit testing framework for C - Version 2.1-3 00:13:50.446 http://cunit.sourceforge.net/ 00:13:50.446 00:13:50.446 00:13:50.446 Suite: components_suite 00:13:51.010 Test: vtophys_malloc_test ...passed 00:13:51.010 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:13:51.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.010 EAL: Restoring previous memory policy: 4 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was expanded by 4MB 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was shrunk by 4MB 00:13:51.010 EAL: Trying to obtain current memory policy. 00:13:51.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.010 EAL: Restoring previous memory policy: 4 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was expanded by 6MB 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was shrunk by 6MB 00:13:51.010 EAL: Trying to obtain current memory policy. 00:13:51.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.010 EAL: Restoring previous memory policy: 4 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was expanded by 10MB 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was shrunk by 10MB 00:13:51.010 EAL: Trying to obtain current memory policy. 
00:13:51.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.010 EAL: Restoring previous memory policy: 4 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was expanded by 18MB 00:13:51.010 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.010 EAL: request: mp_malloc_sync 00:13:51.010 EAL: No shared files mode enabled, IPC is disabled 00:13:51.010 EAL: Heap on socket 0 was shrunk by 18MB 00:13:51.010 EAL: Trying to obtain current memory policy. 00:13:51.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.010 EAL: Restoring previous memory policy: 4 00:13:51.011 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.011 EAL: request: mp_malloc_sync 00:13:51.011 EAL: No shared files mode enabled, IPC is disabled 00:13:51.011 EAL: Heap on socket 0 was expanded by 34MB 00:13:51.269 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.269 EAL: request: mp_malloc_sync 00:13:51.269 EAL: No shared files mode enabled, IPC is disabled 00:13:51.269 EAL: Heap on socket 0 was shrunk by 34MB 00:13:51.269 EAL: Trying to obtain current memory policy. 00:13:51.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.269 EAL: Restoring previous memory policy: 4 00:13:51.269 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.269 EAL: request: mp_malloc_sync 00:13:51.269 EAL: No shared files mode enabled, IPC is disabled 00:13:51.269 EAL: Heap on socket 0 was expanded by 66MB 00:13:51.526 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.526 EAL: request: mp_malloc_sync 00:13:51.526 EAL: No shared files mode enabled, IPC is disabled 00:13:51.526 EAL: Heap on socket 0 was shrunk by 66MB 00:13:51.526 EAL: Trying to obtain current memory policy. 00:13:51.526 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:51.526 EAL: Restoring previous memory policy: 4 00:13:51.526 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.526 EAL: request: mp_malloc_sync 00:13:51.526 EAL: No shared files mode enabled, IPC is disabled 00:13:51.526 EAL: Heap on socket 0 was expanded by 130MB 00:13:51.783 EAL: Calling mem event callback 'spdk:(nil)' 00:13:52.041 EAL: request: mp_malloc_sync 00:13:52.041 EAL: No shared files mode enabled, IPC is disabled 00:13:52.041 EAL: Heap on socket 0 was shrunk by 130MB 00:13:52.041 EAL: Trying to obtain current memory policy. 00:13:52.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:52.298 EAL: Restoring previous memory policy: 4 00:13:52.298 EAL: Calling mem event callback 'spdk:(nil)' 00:13:52.298 EAL: request: mp_malloc_sync 00:13:52.298 EAL: No shared files mode enabled, IPC is disabled 00:13:52.298 EAL: Heap on socket 0 was expanded by 258MB 00:13:52.864 EAL: Calling mem event callback 'spdk:(nil)' 00:13:52.864 EAL: request: mp_malloc_sync 00:13:52.864 EAL: No shared files mode enabled, IPC is disabled 00:13:52.864 EAL: Heap on socket 0 was shrunk by 258MB 00:13:53.431 EAL: Trying to obtain current memory policy. 
00:13:53.431 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:53.432 EAL: Restoring previous memory policy: 4 00:13:53.432 EAL: Calling mem event callback 'spdk:(nil)' 00:13:53.432 EAL: request: mp_malloc_sync 00:13:53.432 EAL: No shared files mode enabled, IPC is disabled 00:13:53.432 EAL: Heap on socket 0 was expanded by 514MB 00:13:54.834 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.834 EAL: request: mp_malloc_sync 00:13:54.834 EAL: No shared files mode enabled, IPC is disabled 00:13:54.834 EAL: Heap on socket 0 was shrunk by 514MB 00:13:55.770 EAL: Trying to obtain current memory policy. 00:13:55.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:56.028 EAL: Restoring previous memory policy: 4 00:13:56.028 EAL: Calling mem event callback 'spdk:(nil)' 00:13:56.028 EAL: request: mp_malloc_sync 00:13:56.028 EAL: No shared files mode enabled, IPC is disabled 00:13:56.028 EAL: Heap on socket 0 was expanded by 1026MB 00:13:58.553 EAL: Calling mem event callback 'spdk:(nil)' 00:13:58.553 EAL: request: mp_malloc_sync 00:13:58.553 EAL: No shared files mode enabled, IPC is disabled 00:13:58.553 EAL: Heap on socket 0 was shrunk by 1026MB 00:14:00.489 passed 00:14:00.489 00:14:00.489 Run Summary: Type Total Ran Passed Failed Inactive 00:14:00.489 suites 1 1 n/a 0 0 00:14:00.489 tests 2 2 2 0 0 00:14:00.489 asserts 5243 5243 5243 0 n/a 00:14:00.489 00:14:00.489 Elapsed time = 9.971 seconds 00:14:00.489 EAL: Calling mem event callback 'spdk:(nil)' 00:14:00.489 EAL: request: mp_malloc_sync 00:14:00.489 EAL: No shared files mode enabled, IPC is disabled 00:14:00.489 EAL: Heap on socket 0 was shrunk by 2MB 00:14:00.489 EAL: No shared files mode enabled, IPC is disabled 00:14:00.489 EAL: No shared files mode enabled, IPC is disabled 00:14:00.489 EAL: No shared files mode enabled, IPC is disabled 00:14:00.489 00:14:00.489 real 0m10.331s 00:14:00.489 user 0m9.240s 00:14:00.489 sys 0m0.915s 00:14:00.489 ************************************ 00:14:00.489 END TEST env_vtophys 00:14:00.489 ************************************ 00:14:00.489 16:52:02 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.489 16:52:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:14:00.489 16:52:02 env -- common/autotest_common.sh@1142 -- # return 0 00:14:00.489 16:52:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:00.489 16:52:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:00.489 16:52:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.489 16:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:14:00.489 ************************************ 00:14:00.489 START TEST env_pci 00:14:00.489 ************************************ 00:14:00.489 16:52:02 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:00.749 00:14:00.749 00:14:00.749 CUnit - A unit testing framework for C - Version 2.1-3 00:14:00.749 http://cunit.sourceforge.net/ 00:14:00.749 00:14:00.749 00:14:00.749 Suite: pci 00:14:00.749 Test: pci_hook ...[2024-07-22 16:52:02.149306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59536 has claimed it 00:14:00.749 passed 00:14:00.749 00:14:00.749 Run Summary: Type Total Ran Passed Failed Inactive 00:14:00.749 suites 1 1 n/a 0 0 00:14:00.749 tests 1 1 1 0 0 00:14:00.749 asserts 25 25 25 0 n/a 00:14:00.749 
00:14:00.749 Elapsed time = 0.012 seconds 00:14:00.749 EAL: Cannot find device (10000:00:01.0) 00:14:00.749 EAL: Failed to attach device on primary process 00:14:00.749 00:14:00.749 real 0m0.108s 00:14:00.749 user 0m0.050s 00:14:00.749 sys 0m0.057s 00:14:00.749 16:52:02 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.749 ************************************ 00:14:00.749 END TEST env_pci 00:14:00.749 ************************************ 00:14:00.749 16:52:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:14:00.749 16:52:02 env -- common/autotest_common.sh@1142 -- # return 0 00:14:00.749 16:52:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:14:00.749 16:52:02 env -- env/env.sh@15 -- # uname 00:14:00.749 16:52:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:14:00.749 16:52:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:14:00.749 16:52:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:00.749 16:52:02 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:00.749 16:52:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.749 16:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:14:00.749 ************************************ 00:14:00.749 START TEST env_dpdk_post_init 00:14:00.749 ************************************ 00:14:00.749 16:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:00.749 EAL: Detected CPU lcores: 10 00:14:00.749 EAL: Detected NUMA nodes: 1 00:14:00.749 EAL: Detected shared linkage of DPDK 00:14:00.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:01.006 EAL: Selected IOVA mode 'PA' 00:14:01.006 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:01.006 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:14:01.006 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:14:01.006 Starting DPDK initialization... 00:14:01.006 Starting SPDK post initialization... 00:14:01.006 SPDK NVMe probe 00:14:01.006 Attaching to 0000:00:10.0 00:14:01.006 Attaching to 0000:00:11.0 00:14:01.006 Attached to 0000:00:10.0 00:14:01.006 Attached to 0000:00:11.0 00:14:01.006 Cleaning up... 
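For context, the "Starting DPDK initialization... / SPDK NVMe probe / Attaching / Attached / Cleaning up..." lines printed by env_dpdk_post_init above follow the usual SPDK environment-init-plus-probe pattern. The sketch below is illustrative only, assuming the long-standing spdk_env_init()/spdk_nvme_probe() public APIs; it is simplified (fixed-size controller list, trimmed error handling, hypothetical app name) and is not the actual test source.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlrs[8];
    static int g_num_ctrlrs;

    /* Called for each NVMe device found; returning true asks SPDK to attach it. */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attaching to %s\n", trid->traddr);
            return true;
    }

    /* Called once the controller is up, matching the "Attached to ..." lines. */
    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
            if (g_num_ctrlrs < 8) {
                    g_ctrlrs[g_num_ctrlrs++] = ctrlr;
            }
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;
            int i;

            spdk_env_opts_init(&opts);           /* "Starting DPDK initialization..." */
            opts.name = "post_init_sketch";      /* hypothetical app name */
            opts.core_mask = "0x1";              /* mirrors the -c 0x1 passed above */
            if (spdk_env_init(&opts) != 0) {
                    fprintf(stderr, "spdk_env_init failed\n");
                    return 1;
            }

            printf("Starting SPDK post initialization...\n");
            /* NULL transport ID: enumerate local PCIe NVMe devices, as in the log. */
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                    fprintf(stderr, "spdk_nvme_probe failed\n");
                    return 1;
            }

            printf("Cleaning up...\n");
            for (i = 0; i < g_num_ctrlrs; i++) {
                    spdk_nvme_detach(g_ctrlrs[i]);
            }
            return 0;
    }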
00:14:01.006 00:14:01.006 real 0m0.282s 00:14:01.006 user 0m0.085s 00:14:01.006 sys 0m0.096s 00:14:01.006 16:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.006 ************************************ 00:14:01.006 END TEST env_dpdk_post_init 00:14:01.006 ************************************ 00:14:01.006 16:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:14:01.006 16:52:02 env -- common/autotest_common.sh@1142 -- # return 0 00:14:01.006 16:52:02 env -- env/env.sh@26 -- # uname 00:14:01.006 16:52:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:14:01.006 16:52:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:01.006 16:52:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:01.006 16:52:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.006 16:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:14:01.006 ************************************ 00:14:01.006 START TEST env_mem_callbacks 00:14:01.006 ************************************ 00:14:01.006 16:52:02 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:01.264 EAL: Detected CPU lcores: 10 00:14:01.264 EAL: Detected NUMA nodes: 1 00:14:01.264 EAL: Detected shared linkage of DPDK 00:14:01.264 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:01.264 EAL: Selected IOVA mode 'PA' 00:14:01.264 00:14:01.264 00:14:01.264 CUnit - A unit testing framework for C - Version 2.1-3 00:14:01.264 http://cunit.sourceforge.net/ 00:14:01.264 00:14:01.264 00:14:01.264 Suite: memory 00:14:01.264 Test: test ... 00:14:01.264 register 0x200000200000 2097152 00:14:01.264 malloc 3145728 00:14:01.264 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:01.264 register 0x200000400000 4194304 00:14:01.264 buf 0x2000004fffc0 len 3145728 PASSED 00:14:01.264 malloc 64 00:14:01.264 buf 0x2000004ffec0 len 64 PASSED 00:14:01.264 malloc 4194304 00:14:01.264 register 0x200000800000 6291456 00:14:01.264 buf 0x2000009fffc0 len 4194304 PASSED 00:14:01.264 free 0x2000004fffc0 3145728 00:14:01.264 free 0x2000004ffec0 64 00:14:01.264 unregister 0x200000400000 4194304 PASSED 00:14:01.264 free 0x2000009fffc0 4194304 00:14:01.264 unregister 0x200000800000 6291456 PASSED 00:14:01.264 malloc 8388608 00:14:01.264 register 0x200000400000 10485760 00:14:01.264 buf 0x2000005fffc0 len 8388608 PASSED 00:14:01.264 free 0x2000005fffc0 8388608 00:14:01.264 unregister 0x200000400000 10485760 PASSED 00:14:01.264 passed 00:14:01.264 00:14:01.264 Run Summary: Type Total Ran Passed Failed Inactive 00:14:01.264 suites 1 1 n/a 0 0 00:14:01.264 tests 1 1 1 0 0 00:14:01.264 asserts 15 15 15 0 n/a 00:14:01.264 00:14:01.264 Elapsed time = 0.085 seconds 00:14:01.534 00:14:01.534 real 0m0.289s 00:14:01.534 user 0m0.117s 00:14:01.534 sys 0m0.070s 00:14:01.534 16:52:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.534 ************************************ 00:14:01.534 END TEST env_mem_callbacks 00:14:01.534 ************************************ 00:14:01.534 16:52:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 16:52:02 env -- common/autotest_common.sh@1142 -- # return 0 00:14:01.534 00:14:01.534 real 0m11.809s 00:14:01.534 user 0m10.018s 00:14:01.534 sys 0m1.401s 00:14:01.534 16:52:02 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.534 
16:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 ************************************ 00:14:01.534 END TEST env 00:14:01.534 ************************************ 00:14:01.534 16:52:02 -- common/autotest_common.sh@1142 -- # return 0 00:14:01.534 16:52:02 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:01.534 16:52:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:01.534 16:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.534 16:52:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 ************************************ 00:14:01.534 START TEST rpc 00:14:01.534 ************************************ 00:14:01.534 16:52:02 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:01.534 * Looking for test storage... 00:14:01.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:01.534 16:52:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59655 00:14:01.534 16:52:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:01.534 16:52:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:14:01.534 16:52:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59655 00:14:01.534 16:52:03 rpc -- common/autotest_common.sh@829 -- # '[' -z 59655 ']' 00:14:01.534 16:52:03 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.534 16:52:03 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.534 16:52:03 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.534 16:52:03 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.534 16:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.792 [2024-07-22 16:52:03.212632] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:01.792 [2024-07-22 16:52:03.212766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:14:01.792 [2024-07-22 16:52:03.379087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.051 [2024-07-22 16:52:03.641764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:14:02.051 [2024-07-22 16:52:03.641829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59655' to capture a snapshot of events at runtime. 00:14:02.051 [2024-07-22 16:52:03.641859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.051 [2024-07-22 16:52:03.641872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.051 [2024-07-22 16:52:03.641888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59655 for offline analysis/debug. 
00:14:02.051 [2024-07-22 16:52:03.641942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.617 [2024-07-22 16:52:03.936807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:03.183 16:52:04 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.183 16:52:04 rpc -- common/autotest_common.sh@862 -- # return 0 00:14:03.183 16:52:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:03.183 16:52:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:03.183 16:52:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:14:03.183 16:52:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:14:03.183 16:52:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:03.183 16:52:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.183 16:52:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.183 ************************************ 00:14:03.183 START TEST rpc_integrity 00:14:03.183 ************************************ 00:14:03.183 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:14:03.183 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:03.183 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.183 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.183 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.183 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:03.183 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:03.445 { 00:14:03.445 "name": "Malloc0", 00:14:03.445 "aliases": [ 00:14:03.445 "6d920662-e7d0-4f9e-9edf-5196a5c1f7cf" 00:14:03.445 ], 00:14:03.445 "product_name": "Malloc disk", 00:14:03.445 "block_size": 512, 00:14:03.445 "num_blocks": 16384, 00:14:03.445 "uuid": "6d920662-e7d0-4f9e-9edf-5196a5c1f7cf", 00:14:03.445 "assigned_rate_limits": { 00:14:03.445 "rw_ios_per_sec": 0, 00:14:03.445 "rw_mbytes_per_sec": 0, 00:14:03.445 "r_mbytes_per_sec": 0, 00:14:03.445 "w_mbytes_per_sec": 0 00:14:03.445 }, 00:14:03.445 "claimed": false, 00:14:03.445 "zoned": false, 00:14:03.445 
"supported_io_types": { 00:14:03.445 "read": true, 00:14:03.445 "write": true, 00:14:03.445 "unmap": true, 00:14:03.445 "flush": true, 00:14:03.445 "reset": true, 00:14:03.445 "nvme_admin": false, 00:14:03.445 "nvme_io": false, 00:14:03.445 "nvme_io_md": false, 00:14:03.445 "write_zeroes": true, 00:14:03.445 "zcopy": true, 00:14:03.445 "get_zone_info": false, 00:14:03.445 "zone_management": false, 00:14:03.445 "zone_append": false, 00:14:03.445 "compare": false, 00:14:03.445 "compare_and_write": false, 00:14:03.445 "abort": true, 00:14:03.445 "seek_hole": false, 00:14:03.445 "seek_data": false, 00:14:03.445 "copy": true, 00:14:03.445 "nvme_iov_md": false 00:14:03.445 }, 00:14:03.445 "memory_domains": [ 00:14:03.445 { 00:14:03.445 "dma_device_id": "system", 00:14:03.445 "dma_device_type": 1 00:14:03.445 }, 00:14:03.445 { 00:14:03.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.445 "dma_device_type": 2 00:14:03.445 } 00:14:03.445 ], 00:14:03.445 "driver_specific": {} 00:14:03.445 } 00:14:03.445 ]' 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.445 [2024-07-22 16:52:04.906011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:14:03.445 [2024-07-22 16:52:04.906112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.445 [2024-07-22 16:52:04.906162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:03.445 [2024-07-22 16:52:04.906181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.445 [2024-07-22 16:52:04.909058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.445 [2024-07-22 16:52:04.909145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:03.445 Passthru0 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:03.445 { 00:14:03.445 "name": "Malloc0", 00:14:03.445 "aliases": [ 00:14:03.445 "6d920662-e7d0-4f9e-9edf-5196a5c1f7cf" 00:14:03.445 ], 00:14:03.445 "product_name": "Malloc disk", 00:14:03.445 "block_size": 512, 00:14:03.445 "num_blocks": 16384, 00:14:03.445 "uuid": "6d920662-e7d0-4f9e-9edf-5196a5c1f7cf", 00:14:03.445 "assigned_rate_limits": { 00:14:03.445 "rw_ios_per_sec": 0, 00:14:03.445 "rw_mbytes_per_sec": 0, 00:14:03.445 "r_mbytes_per_sec": 0, 00:14:03.445 "w_mbytes_per_sec": 0 00:14:03.445 }, 00:14:03.445 "claimed": true, 00:14:03.445 "claim_type": "exclusive_write", 00:14:03.445 "zoned": false, 00:14:03.445 "supported_io_types": { 00:14:03.445 "read": true, 00:14:03.445 "write": true, 00:14:03.445 "unmap": true, 00:14:03.445 "flush": true, 00:14:03.445 "reset": true, 00:14:03.445 "nvme_admin": false, 
00:14:03.445 "nvme_io": false, 00:14:03.445 "nvme_io_md": false, 00:14:03.445 "write_zeroes": true, 00:14:03.445 "zcopy": true, 00:14:03.445 "get_zone_info": false, 00:14:03.445 "zone_management": false, 00:14:03.445 "zone_append": false, 00:14:03.445 "compare": false, 00:14:03.445 "compare_and_write": false, 00:14:03.445 "abort": true, 00:14:03.445 "seek_hole": false, 00:14:03.445 "seek_data": false, 00:14:03.445 "copy": true, 00:14:03.445 "nvme_iov_md": false 00:14:03.445 }, 00:14:03.445 "memory_domains": [ 00:14:03.445 { 00:14:03.445 "dma_device_id": "system", 00:14:03.445 "dma_device_type": 1 00:14:03.445 }, 00:14:03.445 { 00:14:03.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.445 "dma_device_type": 2 00:14:03.445 } 00:14:03.445 ], 00:14:03.445 "driver_specific": {} 00:14:03.445 }, 00:14:03.445 { 00:14:03.445 "name": "Passthru0", 00:14:03.445 "aliases": [ 00:14:03.445 "2a43099a-cbe0-54d0-95f5-96c8d53686e6" 00:14:03.445 ], 00:14:03.445 "product_name": "passthru", 00:14:03.445 "block_size": 512, 00:14:03.445 "num_blocks": 16384, 00:14:03.445 "uuid": "2a43099a-cbe0-54d0-95f5-96c8d53686e6", 00:14:03.445 "assigned_rate_limits": { 00:14:03.445 "rw_ios_per_sec": 0, 00:14:03.445 "rw_mbytes_per_sec": 0, 00:14:03.445 "r_mbytes_per_sec": 0, 00:14:03.445 "w_mbytes_per_sec": 0 00:14:03.445 }, 00:14:03.445 "claimed": false, 00:14:03.445 "zoned": false, 00:14:03.445 "supported_io_types": { 00:14:03.445 "read": true, 00:14:03.445 "write": true, 00:14:03.445 "unmap": true, 00:14:03.445 "flush": true, 00:14:03.445 "reset": true, 00:14:03.445 "nvme_admin": false, 00:14:03.445 "nvme_io": false, 00:14:03.445 "nvme_io_md": false, 00:14:03.445 "write_zeroes": true, 00:14:03.445 "zcopy": true, 00:14:03.445 "get_zone_info": false, 00:14:03.445 "zone_management": false, 00:14:03.445 "zone_append": false, 00:14:03.445 "compare": false, 00:14:03.445 "compare_and_write": false, 00:14:03.445 "abort": true, 00:14:03.445 "seek_hole": false, 00:14:03.445 "seek_data": false, 00:14:03.445 "copy": true, 00:14:03.445 "nvme_iov_md": false 00:14:03.445 }, 00:14:03.445 "memory_domains": [ 00:14:03.445 { 00:14:03.445 "dma_device_id": "system", 00:14:03.445 "dma_device_type": 1 00:14:03.445 }, 00:14:03.445 { 00:14:03.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.445 "dma_device_type": 2 00:14:03.445 } 00:14:03.445 ], 00:14:03.445 "driver_specific": { 00:14:03.445 "passthru": { 00:14:03.445 "name": "Passthru0", 00:14:03.445 "base_bdev_name": "Malloc0" 00:14:03.445 } 00:14:03.445 } 00:14:03.445 } 00:14:03.445 ]' 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:03.445 16:52:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.445 16:52:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.445 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.445 16:52:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:03.445 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.445 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.445 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.445 16:52:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:03.446 16:52:05 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.446 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.446 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.446 16:52:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:03.705 16:52:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:03.705 16:52:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:03.705 00:14:03.705 real 0m0.365s 00:14:03.705 user 0m0.199s 00:14:03.705 sys 0m0.052s 00:14:03.705 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.705 16:52:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 ************************************ 00:14:03.705 END TEST rpc_integrity 00:14:03.705 ************************************ 00:14:03.705 16:52:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:03.705 16:52:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:14:03.705 16:52:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:03.705 16:52:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.705 16:52:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 ************************************ 00:14:03.705 START TEST rpc_plugins 00:14:03.705 ************************************ 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:14:03.705 { 00:14:03.705 "name": "Malloc1", 00:14:03.705 "aliases": [ 00:14:03.705 "d2114165-b61e-4f25-a4b5-05924f90b02e" 00:14:03.705 ], 00:14:03.705 "product_name": "Malloc disk", 00:14:03.705 "block_size": 4096, 00:14:03.705 "num_blocks": 256, 00:14:03.705 "uuid": "d2114165-b61e-4f25-a4b5-05924f90b02e", 00:14:03.705 "assigned_rate_limits": { 00:14:03.705 "rw_ios_per_sec": 0, 00:14:03.705 "rw_mbytes_per_sec": 0, 00:14:03.705 "r_mbytes_per_sec": 0, 00:14:03.705 "w_mbytes_per_sec": 0 00:14:03.705 }, 00:14:03.705 "claimed": false, 00:14:03.705 "zoned": false, 00:14:03.705 "supported_io_types": { 00:14:03.705 "read": true, 00:14:03.705 "write": true, 00:14:03.705 "unmap": true, 00:14:03.705 "flush": true, 00:14:03.705 "reset": true, 00:14:03.705 "nvme_admin": false, 00:14:03.705 "nvme_io": false, 00:14:03.705 "nvme_io_md": false, 00:14:03.705 "write_zeroes": true, 00:14:03.705 "zcopy": true, 00:14:03.705 "get_zone_info": false, 00:14:03.705 "zone_management": false, 00:14:03.705 "zone_append": false, 00:14:03.705 "compare": false, 00:14:03.705 "compare_and_write": false, 00:14:03.705 "abort": true, 00:14:03.705 "seek_hole": false, 00:14:03.705 "seek_data": false, 00:14:03.705 "copy": true, 00:14:03.705 
"nvme_iov_md": false 00:14:03.705 }, 00:14:03.705 "memory_domains": [ 00:14:03.705 { 00:14:03.705 "dma_device_id": "system", 00:14:03.705 "dma_device_type": 1 00:14:03.705 }, 00:14:03.705 { 00:14:03.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.705 "dma_device_type": 2 00:14:03.705 } 00:14:03.705 ], 00:14:03.705 "driver_specific": {} 00:14:03.705 } 00:14:03.705 ]' 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:14:03.705 16:52:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:14:03.705 00:14:03.705 real 0m0.156s 00:14:03.705 user 0m0.088s 00:14:03.705 sys 0m0.027s 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.705 ************************************ 00:14:03.705 END TEST rpc_plugins 00:14:03.705 ************************************ 00:14:03.705 16:52:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:03.964 16:52:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:03.964 16:52:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:14:03.964 16:52:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:03.964 16:52:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.964 16:52:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.964 ************************************ 00:14:03.964 START TEST rpc_trace_cmd_test 00:14:03.964 ************************************ 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.964 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:14:03.964 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59655", 00:14:03.964 "tpoint_group_mask": "0x8", 00:14:03.964 "iscsi_conn": { 00:14:03.964 "mask": "0x2", 00:14:03.964 "tpoint_mask": "0x0" 00:14:03.964 }, 00:14:03.964 "scsi": { 00:14:03.964 "mask": "0x4", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "bdev": { 00:14:03.965 "mask": "0x8", 00:14:03.965 "tpoint_mask": "0xffffffffffffffff" 00:14:03.965 }, 00:14:03.965 "nvmf_rdma": { 00:14:03.965 "mask": "0x10", 00:14:03.965 "tpoint_mask": "0x0" 
00:14:03.965 }, 00:14:03.965 "nvmf_tcp": { 00:14:03.965 "mask": "0x20", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "ftl": { 00:14:03.965 "mask": "0x40", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "blobfs": { 00:14:03.965 "mask": "0x80", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "dsa": { 00:14:03.965 "mask": "0x200", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "thread": { 00:14:03.965 "mask": "0x400", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "nvme_pcie": { 00:14:03.965 "mask": "0x800", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "iaa": { 00:14:03.965 "mask": "0x1000", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "nvme_tcp": { 00:14:03.965 "mask": "0x2000", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "bdev_nvme": { 00:14:03.965 "mask": "0x4000", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 }, 00:14:03.965 "sock": { 00:14:03.965 "mask": "0x8000", 00:14:03.965 "tpoint_mask": "0x0" 00:14:03.965 } 00:14:03.965 }' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:14:03.965 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:14:04.223 16:52:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:14:04.223 00:14:04.223 real 0m0.253s 00:14:04.223 user 0m0.206s 00:14:04.223 sys 0m0.038s 00:14:04.223 ************************************ 00:14:04.223 END TEST rpc_trace_cmd_test 00:14:04.223 ************************************ 00:14:04.223 16:52:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.223 16:52:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.223 16:52:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:04.223 16:52:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:14:04.223 16:52:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:04.224 16:52:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:04.224 16:52:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:04.224 16:52:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.224 16:52:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.224 ************************************ 00:14:04.224 START TEST rpc_daemon_integrity 00:14:04.224 ************************************ 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.224 
16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:04.224 { 00:14:04.224 "name": "Malloc2", 00:14:04.224 "aliases": [ 00:14:04.224 "c96c1904-6ddc-4eb4-99ca-53f7356da591" 00:14:04.224 ], 00:14:04.224 "product_name": "Malloc disk", 00:14:04.224 "block_size": 512, 00:14:04.224 "num_blocks": 16384, 00:14:04.224 "uuid": "c96c1904-6ddc-4eb4-99ca-53f7356da591", 00:14:04.224 "assigned_rate_limits": { 00:14:04.224 "rw_ios_per_sec": 0, 00:14:04.224 "rw_mbytes_per_sec": 0, 00:14:04.224 "r_mbytes_per_sec": 0, 00:14:04.224 "w_mbytes_per_sec": 0 00:14:04.224 }, 00:14:04.224 "claimed": false, 00:14:04.224 "zoned": false, 00:14:04.224 "supported_io_types": { 00:14:04.224 "read": true, 00:14:04.224 "write": true, 00:14:04.224 "unmap": true, 00:14:04.224 "flush": true, 00:14:04.224 "reset": true, 00:14:04.224 "nvme_admin": false, 00:14:04.224 "nvme_io": false, 00:14:04.224 "nvme_io_md": false, 00:14:04.224 "write_zeroes": true, 00:14:04.224 "zcopy": true, 00:14:04.224 "get_zone_info": false, 00:14:04.224 "zone_management": false, 00:14:04.224 "zone_append": false, 00:14:04.224 "compare": false, 00:14:04.224 "compare_and_write": false, 00:14:04.224 "abort": true, 00:14:04.224 "seek_hole": false, 00:14:04.224 "seek_data": false, 00:14:04.224 "copy": true, 00:14:04.224 "nvme_iov_md": false 00:14:04.224 }, 00:14:04.224 "memory_domains": [ 00:14:04.224 { 00:14:04.224 "dma_device_id": "system", 00:14:04.224 "dma_device_type": 1 00:14:04.224 }, 00:14:04.224 { 00:14:04.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.224 "dma_device_type": 2 00:14:04.224 } 00:14:04.224 ], 00:14:04.224 "driver_specific": {} 00:14:04.224 } 00:14:04.224 ]' 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.224 [2024-07-22 16:52:05.817925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:14:04.224 [2024-07-22 16:52:05.818013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.224 [2024-07-22 16:52:05.818041] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:14:04.224 [2024-07-22 16:52:05.818058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.224 [2024-07-22 16:52:05.820903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.224 [2024-07-22 16:52:05.820961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:04.224 Passthru0 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.224 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.482 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.482 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:04.482 { 00:14:04.482 "name": "Malloc2", 00:14:04.482 "aliases": [ 00:14:04.482 "c96c1904-6ddc-4eb4-99ca-53f7356da591" 00:14:04.482 ], 00:14:04.482 "product_name": "Malloc disk", 00:14:04.482 "block_size": 512, 00:14:04.482 "num_blocks": 16384, 00:14:04.482 "uuid": "c96c1904-6ddc-4eb4-99ca-53f7356da591", 00:14:04.482 "assigned_rate_limits": { 00:14:04.482 "rw_ios_per_sec": 0, 00:14:04.482 "rw_mbytes_per_sec": 0, 00:14:04.482 "r_mbytes_per_sec": 0, 00:14:04.482 "w_mbytes_per_sec": 0 00:14:04.482 }, 00:14:04.482 "claimed": true, 00:14:04.482 "claim_type": "exclusive_write", 00:14:04.482 "zoned": false, 00:14:04.482 "supported_io_types": { 00:14:04.482 "read": true, 00:14:04.482 "write": true, 00:14:04.482 "unmap": true, 00:14:04.482 "flush": true, 00:14:04.482 "reset": true, 00:14:04.482 "nvme_admin": false, 00:14:04.482 "nvme_io": false, 00:14:04.482 "nvme_io_md": false, 00:14:04.482 "write_zeroes": true, 00:14:04.482 "zcopy": true, 00:14:04.482 "get_zone_info": false, 00:14:04.482 "zone_management": false, 00:14:04.482 "zone_append": false, 00:14:04.482 "compare": false, 00:14:04.482 "compare_and_write": false, 00:14:04.482 "abort": true, 00:14:04.482 "seek_hole": false, 00:14:04.482 "seek_data": false, 00:14:04.482 "copy": true, 00:14:04.482 "nvme_iov_md": false 00:14:04.482 }, 00:14:04.482 "memory_domains": [ 00:14:04.482 { 00:14:04.482 "dma_device_id": "system", 00:14:04.482 "dma_device_type": 1 00:14:04.482 }, 00:14:04.482 { 00:14:04.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.482 "dma_device_type": 2 00:14:04.482 } 00:14:04.482 ], 00:14:04.482 "driver_specific": {} 00:14:04.482 }, 00:14:04.482 { 00:14:04.482 "name": "Passthru0", 00:14:04.482 "aliases": [ 00:14:04.483 "6267440d-6583-5018-b9bc-29cd9d46e38a" 00:14:04.483 ], 00:14:04.483 "product_name": "passthru", 00:14:04.483 "block_size": 512, 00:14:04.483 "num_blocks": 16384, 00:14:04.483 "uuid": "6267440d-6583-5018-b9bc-29cd9d46e38a", 00:14:04.483 "assigned_rate_limits": { 00:14:04.483 "rw_ios_per_sec": 0, 00:14:04.483 "rw_mbytes_per_sec": 0, 00:14:04.483 "r_mbytes_per_sec": 0, 00:14:04.483 "w_mbytes_per_sec": 0 00:14:04.483 }, 00:14:04.483 "claimed": false, 00:14:04.483 "zoned": false, 00:14:04.483 "supported_io_types": { 00:14:04.483 "read": true, 00:14:04.483 "write": true, 00:14:04.483 "unmap": true, 00:14:04.483 "flush": true, 00:14:04.483 "reset": true, 00:14:04.483 "nvme_admin": false, 00:14:04.483 "nvme_io": false, 00:14:04.483 "nvme_io_md": false, 00:14:04.483 "write_zeroes": true, 00:14:04.483 "zcopy": true, 
00:14:04.483 "get_zone_info": false, 00:14:04.483 "zone_management": false, 00:14:04.483 "zone_append": false, 00:14:04.483 "compare": false, 00:14:04.483 "compare_and_write": false, 00:14:04.483 "abort": true, 00:14:04.483 "seek_hole": false, 00:14:04.483 "seek_data": false, 00:14:04.483 "copy": true, 00:14:04.483 "nvme_iov_md": false 00:14:04.483 }, 00:14:04.483 "memory_domains": [ 00:14:04.483 { 00:14:04.483 "dma_device_id": "system", 00:14:04.483 "dma_device_type": 1 00:14:04.483 }, 00:14:04.483 { 00:14:04.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.483 "dma_device_type": 2 00:14:04.483 } 00:14:04.483 ], 00:14:04.483 "driver_specific": { 00:14:04.483 "passthru": { 00:14:04.483 "name": "Passthru0", 00:14:04.483 "base_bdev_name": "Malloc2" 00:14:04.483 } 00:14:04.483 } 00:14:04.483 } 00:14:04.483 ]' 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:04.483 00:14:04.483 real 0m0.326s 00:14:04.483 user 0m0.171s 00:14:04.483 sys 0m0.051s 00:14:04.483 ************************************ 00:14:04.483 END TEST rpc_daemon_integrity 00:14:04.483 ************************************ 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.483 16:52:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:04.483 16:52:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:04.483 16:52:06 rpc -- rpc/rpc.sh@84 -- # killprocess 59655 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@948 -- # '[' -z 59655 ']' 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@952 -- # kill -0 59655 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@953 -- # uname 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59655 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:04.483 
killing process with pid 59655 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59655' 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@967 -- # kill 59655 00:14:04.483 16:52:06 rpc -- common/autotest_common.sh@972 -- # wait 59655 00:14:07.769 00:14:07.769 real 0m5.815s 00:14:07.769 user 0m6.452s 00:14:07.769 sys 0m0.892s 00:14:07.769 16:52:08 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.769 16:52:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.769 ************************************ 00:14:07.769 END TEST rpc 00:14:07.769 ************************************ 00:14:07.769 16:52:08 -- common/autotest_common.sh@1142 -- # return 0 00:14:07.769 16:52:08 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:07.769 16:52:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:07.769 16:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.769 16:52:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.769 ************************************ 00:14:07.769 START TEST skip_rpc 00:14:07.769 ************************************ 00:14:07.769 16:52:08 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:07.769 * Looking for test storage... 00:14:07.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:07.769 16:52:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:07.769 16:52:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:07.769 16:52:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:07.769 16:52:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:07.769 16:52:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.769 16:52:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.769 ************************************ 00:14:07.769 START TEST skip_rpc 00:14:07.769 ************************************ 00:14:07.769 16:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:14:07.769 16:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59876 00:14:07.769 16:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:07.769 16:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:07.769 16:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:07.769 [2024-07-22 16:52:09.112446] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:07.769 [2024-07-22 16:52:09.112719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59876 ] 00:14:07.769 [2024-07-22 16:52:09.307639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.028 [2024-07-22 16:52:09.596449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.286 [2024-07-22 16:52:09.890781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59876 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59876 ']' 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59876 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.505 16:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59876 00:14:12.505 16:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.505 killing process with pid 59876 00:14:12.505 16:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.505 16:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59876' 00:14:12.505 16:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59876 00:14:12.505 16:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59876 00:14:15.787 00:14:15.787 real 0m7.859s 00:14:15.787 user 0m7.299s 00:14:15.787 sys 0m0.450s 00:14:15.787 16:52:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.787 16:52:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:14:15.787 ************************************ 00:14:15.787 END TEST skip_rpc 00:14:15.787 ************************************ 00:14:15.787 16:52:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:15.787 16:52:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:14:15.787 16:52:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:15.787 16:52:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.787 16:52:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.787 ************************************ 00:14:15.787 START TEST skip_rpc_with_json 00:14:15.787 ************************************ 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59991 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59991 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59991 ']' 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.787 16:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:15.787 [2024-07-22 16:52:17.056764] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:15.787 [2024-07-22 16:52:17.056964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:14:15.787 [2024-07-22 16:52:17.248652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.045 [2024-07-22 16:52:17.582507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.303 [2024-07-22 16:52:17.867372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 [2024-07-22 16:52:18.670801] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:14:17.237 request: 00:14:17.237 { 00:14:17.237 "trtype": "tcp", 00:14:17.237 "method": "nvmf_get_transports", 00:14:17.237 "req_id": 1 00:14:17.237 } 00:14:17.237 Got JSON-RPC error response 00:14:17.237 response: 00:14:17.237 { 00:14:17.237 "code": -19, 00:14:17.237 "message": "No such device" 00:14:17.237 } 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 [2024-07-22 16:52:18.682984] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.494 16:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:17.494 { 00:14:17.494 "subsystems": [ 00:14:17.494 { 00:14:17.494 "subsystem": "vfio_user_target", 00:14:17.494 "config": null 00:14:17.494 }, 00:14:17.494 { 00:14:17.494 "subsystem": "keyring", 00:14:17.494 "config": [] 00:14:17.494 }, 00:14:17.494 { 00:14:17.494 "subsystem": "iobuf", 00:14:17.494 "config": [ 00:14:17.494 { 00:14:17.494 "method": "iobuf_set_options", 00:14:17.494 "params": { 00:14:17.494 "small_pool_count": 8192, 00:14:17.494 "large_pool_count": 1024, 00:14:17.494 "small_bufsize": 8192, 00:14:17.494 "large_bufsize": 135168 00:14:17.494 } 00:14:17.494 } 00:14:17.494 ] 00:14:17.494 }, 00:14:17.494 { 00:14:17.494 "subsystem": "sock", 00:14:17.494 "config": [ 00:14:17.494 { 00:14:17.494 "method": "sock_set_default_impl", 00:14:17.494 "params": { 00:14:17.494 "impl_name": 
"uring" 00:14:17.494 } 00:14:17.494 }, 00:14:17.494 { 00:14:17.494 "method": "sock_impl_set_options", 00:14:17.494 "params": { 00:14:17.494 "impl_name": "ssl", 00:14:17.495 "recv_buf_size": 4096, 00:14:17.495 "send_buf_size": 4096, 00:14:17.495 "enable_recv_pipe": true, 00:14:17.495 "enable_quickack": false, 00:14:17.495 "enable_placement_id": 0, 00:14:17.495 "enable_zerocopy_send_server": true, 00:14:17.495 "enable_zerocopy_send_client": false, 00:14:17.495 "zerocopy_threshold": 0, 00:14:17.495 "tls_version": 0, 00:14:17.495 "enable_ktls": false 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "sock_impl_set_options", 00:14:17.495 "params": { 00:14:17.495 "impl_name": "posix", 00:14:17.495 "recv_buf_size": 2097152, 00:14:17.495 "send_buf_size": 2097152, 00:14:17.495 "enable_recv_pipe": true, 00:14:17.495 "enable_quickack": false, 00:14:17.495 "enable_placement_id": 0, 00:14:17.495 "enable_zerocopy_send_server": true, 00:14:17.495 "enable_zerocopy_send_client": false, 00:14:17.495 "zerocopy_threshold": 0, 00:14:17.495 "tls_version": 0, 00:14:17.495 "enable_ktls": false 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "sock_impl_set_options", 00:14:17.495 "params": { 00:14:17.495 "impl_name": "uring", 00:14:17.495 "recv_buf_size": 2097152, 00:14:17.495 "send_buf_size": 2097152, 00:14:17.495 "enable_recv_pipe": true, 00:14:17.495 "enable_quickack": false, 00:14:17.495 "enable_placement_id": 0, 00:14:17.495 "enable_zerocopy_send_server": false, 00:14:17.495 "enable_zerocopy_send_client": false, 00:14:17.495 "zerocopy_threshold": 0, 00:14:17.495 "tls_version": 0, 00:14:17.495 "enable_ktls": false 00:14:17.495 } 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "vmd", 00:14:17.495 "config": [] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "accel", 00:14:17.495 "config": [ 00:14:17.495 { 00:14:17.495 "method": "accel_set_options", 00:14:17.495 "params": { 00:14:17.495 "small_cache_size": 128, 00:14:17.495 "large_cache_size": 16, 00:14:17.495 "task_count": 2048, 00:14:17.495 "sequence_count": 2048, 00:14:17.495 "buf_count": 2048 00:14:17.495 } 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "bdev", 00:14:17.495 "config": [ 00:14:17.495 { 00:14:17.495 "method": "bdev_set_options", 00:14:17.495 "params": { 00:14:17.495 "bdev_io_pool_size": 65535, 00:14:17.495 "bdev_io_cache_size": 256, 00:14:17.495 "bdev_auto_examine": true, 00:14:17.495 "iobuf_small_cache_size": 128, 00:14:17.495 "iobuf_large_cache_size": 16 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "bdev_raid_set_options", 00:14:17.495 "params": { 00:14:17.495 "process_window_size_kb": 1024, 00:14:17.495 "process_max_bandwidth_mb_sec": 0 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "bdev_iscsi_set_options", 00:14:17.495 "params": { 00:14:17.495 "timeout_sec": 30 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "bdev_nvme_set_options", 00:14:17.495 "params": { 00:14:17.495 "action_on_timeout": "none", 00:14:17.495 "timeout_us": 0, 00:14:17.495 "timeout_admin_us": 0, 00:14:17.495 "keep_alive_timeout_ms": 10000, 00:14:17.495 "arbitration_burst": 0, 00:14:17.495 "low_priority_weight": 0, 00:14:17.495 "medium_priority_weight": 0, 00:14:17.495 "high_priority_weight": 0, 00:14:17.495 "nvme_adminq_poll_period_us": 10000, 00:14:17.495 "nvme_ioq_poll_period_us": 0, 00:14:17.495 "io_queue_requests": 0, 00:14:17.495 "delay_cmd_submit": true, 00:14:17.495 
"transport_retry_count": 4, 00:14:17.495 "bdev_retry_count": 3, 00:14:17.495 "transport_ack_timeout": 0, 00:14:17.495 "ctrlr_loss_timeout_sec": 0, 00:14:17.495 "reconnect_delay_sec": 0, 00:14:17.495 "fast_io_fail_timeout_sec": 0, 00:14:17.495 "disable_auto_failback": false, 00:14:17.495 "generate_uuids": false, 00:14:17.495 "transport_tos": 0, 00:14:17.495 "nvme_error_stat": false, 00:14:17.495 "rdma_srq_size": 0, 00:14:17.495 "io_path_stat": false, 00:14:17.495 "allow_accel_sequence": false, 00:14:17.495 "rdma_max_cq_size": 0, 00:14:17.495 "rdma_cm_event_timeout_ms": 0, 00:14:17.495 "dhchap_digests": [ 00:14:17.495 "sha256", 00:14:17.495 "sha384", 00:14:17.495 "sha512" 00:14:17.495 ], 00:14:17.495 "dhchap_dhgroups": [ 00:14:17.495 "null", 00:14:17.495 "ffdhe2048", 00:14:17.495 "ffdhe3072", 00:14:17.495 "ffdhe4096", 00:14:17.495 "ffdhe6144", 00:14:17.495 "ffdhe8192" 00:14:17.495 ] 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "bdev_nvme_set_hotplug", 00:14:17.495 "params": { 00:14:17.495 "period_us": 100000, 00:14:17.495 "enable": false 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "bdev_wait_for_examine" 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "scsi", 00:14:17.495 "config": null 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "scheduler", 00:14:17.495 "config": [ 00:14:17.495 { 00:14:17.495 "method": "framework_set_scheduler", 00:14:17.495 "params": { 00:14:17.495 "name": "static" 00:14:17.495 } 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "vhost_scsi", 00:14:17.495 "config": [] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "vhost_blk", 00:14:17.495 "config": [] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "ublk", 00:14:17.495 "config": [] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "nbd", 00:14:17.495 "config": [] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "nvmf", 00:14:17.495 "config": [ 00:14:17.495 { 00:14:17.495 "method": "nvmf_set_config", 00:14:17.495 "params": { 00:14:17.495 "discovery_filter": "match_any", 00:14:17.495 "admin_cmd_passthru": { 00:14:17.495 "identify_ctrlr": false 00:14:17.495 } 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "nvmf_set_max_subsystems", 00:14:17.495 "params": { 00:14:17.495 "max_subsystems": 1024 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "nvmf_set_crdt", 00:14:17.495 "params": { 00:14:17.495 "crdt1": 0, 00:14:17.495 "crdt2": 0, 00:14:17.495 "crdt3": 0 00:14:17.495 } 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "method": "nvmf_create_transport", 00:14:17.495 "params": { 00:14:17.495 "trtype": "TCP", 00:14:17.495 "max_queue_depth": 128, 00:14:17.495 "max_io_qpairs_per_ctrlr": 127, 00:14:17.495 "in_capsule_data_size": 4096, 00:14:17.495 "max_io_size": 131072, 00:14:17.495 "io_unit_size": 131072, 00:14:17.495 "max_aq_depth": 128, 00:14:17.495 "num_shared_buffers": 511, 00:14:17.495 "buf_cache_size": 4294967295, 00:14:17.495 "dif_insert_or_strip": false, 00:14:17.495 "zcopy": false, 00:14:17.495 "c2h_success": true, 00:14:17.495 "sock_priority": 0, 00:14:17.495 "abort_timeout_sec": 1, 00:14:17.495 "ack_timeout": 0, 00:14:17.495 "data_wr_pool_size": 0 00:14:17.495 } 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 }, 00:14:17.495 { 00:14:17.495 "subsystem": "iscsi", 00:14:17.495 "config": [ 00:14:17.495 { 00:14:17.495 "method": "iscsi_set_options", 00:14:17.495 "params": { 00:14:17.495 "node_base": "iqn.2016-06.io.spdk", 
00:14:17.495 "max_sessions": 128, 00:14:17.495 "max_connections_per_session": 2, 00:14:17.495 "max_queue_depth": 64, 00:14:17.495 "default_time2wait": 2, 00:14:17.495 "default_time2retain": 20, 00:14:17.495 "first_burst_length": 8192, 00:14:17.495 "immediate_data": true, 00:14:17.495 "allow_duplicated_isid": false, 00:14:17.495 "error_recovery_level": 0, 00:14:17.495 "nop_timeout": 60, 00:14:17.495 "nop_in_interval": 30, 00:14:17.495 "disable_chap": false, 00:14:17.495 "require_chap": false, 00:14:17.495 "mutual_chap": false, 00:14:17.495 "chap_group": 0, 00:14:17.495 "max_large_datain_per_connection": 64, 00:14:17.495 "max_r2t_per_connection": 4, 00:14:17.495 "pdu_pool_size": 36864, 00:14:17.495 "immediate_data_pool_size": 16384, 00:14:17.495 "data_out_pool_size": 2048 00:14:17.495 } 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 } 00:14:17.495 ] 00:14:17.495 } 00:14:17.495 16:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:17.495 16:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59991 00:14:17.495 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59991 ']' 00:14:17.495 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59991 00:14:17.495 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59991 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.496 killing process with pid 59991 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59991' 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59991 00:14:17.496 16:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59991 00:14:20.775 16:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60058 00:14:20.775 16:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:20.775 16:52:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60058 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60058 ']' 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60058 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60058 00:14:26.064 killing process with pid 60058 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 60058' 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60058 00:14:26.064 16:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60058 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:28.601 00:14:28.601 real 0m12.936s 00:14:28.601 user 0m12.349s 00:14:28.601 sys 0m0.998s 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.601 ************************************ 00:14:28.601 END TEST skip_rpc_with_json 00:14:28.601 ************************************ 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:28.601 16:52:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:28.601 16:52:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:14:28.601 16:52:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:28.601 16:52:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.601 16:52:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.601 ************************************ 00:14:28.601 START TEST skip_rpc_with_delay 00:14:28.601 ************************************ 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:28.601 16:52:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:28.601 [2024-07-22 16:52:30.010194] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
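skip_rpc_with_delay is a negative test: spdk_tgt is expected to refuse '--wait-for-rpc' when '--no-rpc-server' is also given, and the NOT wrapper turns that expected failure into a pass. A rough sketch of what such a wrapper does is below; the actual NOT/valid_exec_arg logic in common/autotest_common.sh is more involved (it also normalizes exit codes, as the es= lines in the trace show).
NOT() {
    # Run the command and invert its result: failure is the expected outcome.
    if "$@"; then
        return 1    # unexpected success -> the test should fail
    fi
    return 0        # expected failure -> the test passes
}
# Used above as:
#   NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc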
00:14:28.601 [2024-07-22 16:52:30.010402] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:14:28.601 16:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:14:28.601 16:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.601 16:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.601 16:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.601 00:14:28.601 real 0m0.209s 00:14:28.601 user 0m0.112s 00:14:28.601 sys 0m0.095s 00:14:28.601 ************************************ 00:14:28.601 END TEST skip_rpc_with_delay 00:14:28.601 ************************************ 00:14:28.601 16:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.601 16:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:14:28.601 16:52:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:28.601 16:52:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:14:28.601 16:52:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:14:28.601 16:52:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:14:28.601 16:52:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:28.601 16:52:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.601 16:52:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.601 ************************************ 00:14:28.601 START TEST exit_on_failed_rpc_init 00:14:28.601 ************************************ 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60197 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60197 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 60197 ']' 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.601 16:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:14:28.946 [2024-07-22 16:52:30.243888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:28.946 [2024-07-22 16:52:30.244036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60197 ] 00:14:28.946 [2024-07-22 16:52:30.458927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.207 [2024-07-22 16:52:30.718502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.470 [2024-07-22 16:52:30.993454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:30.406 16:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:30.406 [2024-07-22 16:52:31.920326] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:30.406 [2024-07-22 16:52:31.920561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60221 ] 00:14:30.664 [2024-07-22 16:52:32.092822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.922 [2024-07-22 16:52:32.368840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.922 [2024-07-22 16:52:32.368968] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
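The second spdk_tgt here fails by design: both instances default to the same RPC socket, /var/tmp/spdk.sock, so RPC initialization aborts and the app stops with a non-zero status, which is exactly what exit_on_failed_rpc_init wants to observe. Running two targets side by side would instead need distinct sockets via -r; the socket paths below are illustrative, only the -r flag itself is taken from this log.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
# Each instance is then addressed through its own socket, e.g.:
#   rpc.py -s /var/tmp/spdk_a.sock save_config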
00:14:30.922 [2024-07-22 16:52:32.368993] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:14:30.922 [2024-07-22 16:52:32.369016] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60197 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 60197 ']' 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 60197 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60197 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60197' 00:14:31.488 killing process with pid 60197 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 60197 00:14:31.488 16:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 60197 00:14:34.771 00:14:34.771 real 0m5.726s 00:14:34.771 user 0m6.494s 00:14:34.771 sys 0m0.621s 00:14:34.771 16:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.771 16:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:14:34.771 ************************************ 00:14:34.771 END TEST exit_on_failed_rpc_init 00:14:34.771 ************************************ 00:14:34.771 16:52:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:34.771 16:52:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:34.771 00:14:34.771 real 0m27.029s 00:14:34.771 user 0m26.348s 00:14:34.772 sys 0m2.371s 00:14:34.772 16:52:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.772 16:52:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.772 ************************************ 00:14:34.772 END TEST skip_rpc 00:14:34.772 ************************************ 00:14:34.772 16:52:35 -- common/autotest_common.sh@1142 -- # return 0 00:14:34.772 16:52:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:34.772 16:52:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.772 
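Every test in this log is driven through run_test, which prints the START/END banners and the real/user/sys timing seen above. In outline it behaves roughly like the sketch below; the real helper in common/autotest_common.sh also manages xtrace and result reporting, and the banner text here is simplified.
run_test() {
    # Wrap a test body with banners and timing, propagating its exit status.
    local test_name=$1; shift
    echo "************ START TEST $test_name ************"
    time "$@"
    local rc=$?
    echo "************ END TEST $test_name ************"
    return $rc
}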
16:52:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.772 16:52:35 -- common/autotest_common.sh@10 -- # set +x 00:14:34.772 ************************************ 00:14:34.772 START TEST rpc_client 00:14:34.772 ************************************ 00:14:34.772 16:52:35 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:34.772 * Looking for test storage... 00:14:34.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:14:34.772 16:52:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:14:34.772 OK 00:14:34.772 16:52:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:14:34.772 00:14:34.772 real 0m0.162s 00:14:34.772 user 0m0.063s 00:14:34.772 sys 0m0.103s 00:14:34.772 16:52:36 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.772 16:52:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:14:34.772 ************************************ 00:14:34.772 END TEST rpc_client 00:14:34.772 ************************************ 00:14:34.772 16:52:36 -- common/autotest_common.sh@1142 -- # return 0 00:14:34.772 16:52:36 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:34.772 16:52:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.772 16:52:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.772 16:52:36 -- common/autotest_common.sh@10 -- # set +x 00:14:34.772 ************************************ 00:14:34.772 START TEST json_config 00:14:34.772 ************************************ 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.772 16:52:36 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.772 16:52:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.772 16:52:36 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.772 16:52:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.772 16:52:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.772 16:52:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.772 16:52:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.772 16:52:36 json_config -- paths/export.sh@5 -- # export PATH 00:14:34.772 16:52:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@47 -- # : 0 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.772 16:52:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:14:34.772 INFO: JSON configuration test init 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:34.772 Waiting for target to run... 00:14:34.772 16:52:36 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:14:34.772 16:52:36 json_config -- json_config/common.sh@9 -- # local app=target 00:14:34.772 16:52:36 json_config -- json_config/common.sh@10 -- # shift 00:14:34.772 16:52:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:34.772 16:52:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:34.772 16:52:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:14:34.772 16:52:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:34.772 16:52:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:34.772 16:52:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60380 00:14:34.772 16:52:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:14:34.772 16:52:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
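json_config.sh keeps its per-app settings in the associative arrays declared above (core mask and memory in app_params, RPC socket in app_socket, config file in configs_path), and json_config_test_start_app assembles the spdk_tgt command line from them. A condensed sketch of that assembly, for the target app only, is below; the real helper lives in test/json_config/common.sh.
declare -A app_pid=()
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock')
app=target
# Resulting command matches the trace: spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --wait-for-rpc &
app_pid[$app]=$!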
00:14:34.772 16:52:36 json_config -- json_config/common.sh@25 -- # waitforlisten 60380 /var/tmp/spdk_tgt.sock 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@829 -- # '[' -z 60380 ']' 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:34.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.772 16:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:35.030 [2024-07-22 16:52:36.407928] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:35.030 [2024-07-22 16:52:36.408617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:14:35.290 [2024-07-22 16:52:36.804035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.549 [2024-07-22 16:52:37.099640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.807 16:52:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.807 16:52:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:14:35.807 16:52:37 json_config -- json_config/common.sh@26 -- # echo '' 00:14:35.807 00:14:35.807 16:52:37 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:14:35.807 16:52:37 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:14:35.807 16:52:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.807 16:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:35.807 16:52:37 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:14:35.807 16:52:37 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:14:35.807 16:52:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.807 16:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:35.807 16:52:37 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:14:35.807 16:52:37 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:14:35.807 16:52:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:14:36.374 [2024-07-22 16:52:37.774586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:14:37.310 16:52:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.310 16:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 
'bdev_unregister') 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:14:37.310 16:52:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@51 -- # sort 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:14:37.310 16:52:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.310 16:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@59 -- # return 0 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:14:37.310 16:52:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.310 16:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:14:37.310 16:52:38 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:37.310 16:52:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:37.876 MallocForNvmf0 00:14:37.876 16:52:39 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:37.876 16:52:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:37.876 MallocForNvmf1 00:14:38.135 16:52:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:14:38.135 16:52:39 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:14:38.135 [2024-07-22 16:52:39.670683] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.135 16:52:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:38.135 16:52:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:38.393 16:52:39 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:38.393 16:52:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:38.652 16:52:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:38.652 16:52:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:38.909 16:52:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:38.909 16:52:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:39.186 [2024-07-22 16:52:40.587437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:39.186 16:52:40 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:14:39.186 16:52:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.186 16:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:39.186 16:52:40 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:14:39.186 16:52:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.186 16:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:39.186 16:52:40 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:14:39.186 16:52:40 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:39.186 16:52:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:39.444 MallocBdevForConfigChangeCheck 00:14:39.444 16:52:40 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:14:39.444 16:52:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.444 16:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:39.444 16:52:41 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:14:39.445 16:52:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:40.011 INFO: shutting down applications... 00:14:40.011 16:52:41 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
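Taken together, the tgt_rpc calls above build the NVMe-oF/TCP target that the test later saves and reloads. The same configuration can be issued by hand against the target's RPC socket; every command below appears in the trace, only the final redirection of save_config into a file is implied rather than shown.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
$rpc -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# Capture the running configuration, as the test does before restarting the target
# (the redirection target here is assumed from configs_path above):
$rpc -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json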
00:14:40.011 16:52:41 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:14:40.011 16:52:41 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:14:40.011 16:52:41 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:14:40.011 16:52:41 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:14:40.270 Calling clear_iscsi_subsystem 00:14:40.270 Calling clear_nvmf_subsystem 00:14:40.270 Calling clear_nbd_subsystem 00:14:40.270 Calling clear_ublk_subsystem 00:14:40.270 Calling clear_vhost_blk_subsystem 00:14:40.270 Calling clear_vhost_scsi_subsystem 00:14:40.270 Calling clear_bdev_subsystem 00:14:40.270 16:52:41 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:14:40.270 16:52:41 json_config -- json_config/json_config.sh@347 -- # count=100 00:14:40.270 16:52:41 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:14:40.270 16:52:41 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:40.270 16:52:41 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:14:40.270 16:52:41 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:14:40.838 16:52:42 json_config -- json_config/json_config.sh@349 -- # break 00:14:40.838 16:52:42 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:14:40.838 16:52:42 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:14:40.838 16:52:42 json_config -- json_config/common.sh@31 -- # local app=target 00:14:40.838 16:52:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:40.838 16:52:42 json_config -- json_config/common.sh@35 -- # [[ -n 60380 ]] 00:14:40.838 16:52:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60380 00:14:40.838 16:52:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:40.838 16:52:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:40.838 16:52:42 json_config -- json_config/common.sh@41 -- # kill -0 60380 00:14:40.838 16:52:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:14:41.461 16:52:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:14:41.461 16:52:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:41.461 16:52:42 json_config -- json_config/common.sh@41 -- # kill -0 60380 00:14:41.461 16:52:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:14:41.719 16:52:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:14:41.719 16:52:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:41.719 16:52:43 json_config -- json_config/common.sh@41 -- # kill -0 60380 00:14:41.719 16:52:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:14:42.284 16:52:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:14:42.284 16:52:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:42.284 16:52:43 json_config -- json_config/common.sh@41 -- # kill -0 60380 00:14:42.284 16:52:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:42.284 16:52:43 json_config -- json_config/common.sh@43 -- # break 00:14:42.284 16:52:43 json_config -- 
json_config/common.sh@48 -- # [[ -n '' ]] 00:14:42.284 SPDK target shutdown done 00:14:42.284 16:52:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:42.284 INFO: relaunching applications... 00:14:42.284 Waiting for target to run... 00:14:42.284 16:52:43 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:14:42.284 16:52:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:42.284 16:52:43 json_config -- json_config/common.sh@9 -- # local app=target 00:14:42.284 16:52:43 json_config -- json_config/common.sh@10 -- # shift 00:14:42.284 16:52:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:42.284 16:52:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:42.284 16:52:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:14:42.284 16:52:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:42.284 16:52:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:42.284 16:52:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60595 00:14:42.284 16:52:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:42.284 16:52:43 json_config -- json_config/common.sh@25 -- # waitforlisten 60595 /var/tmp/spdk_tgt.sock 00:14:42.284 16:52:43 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:42.284 16:52:43 json_config -- common/autotest_common.sh@829 -- # '[' -z 60595 ']' 00:14:42.284 16:52:43 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:42.284 16:52:43 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.284 16:52:43 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:42.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:42.284 16:52:43 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.284 16:52:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:42.542 [2024-07-22 16:52:43.964875] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:42.542 [2024-07-22 16:52:43.965342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:14:42.800 [2024-07-22 16:52:44.386973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.071 [2024-07-22 16:52:44.624737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.652 [2024-07-22 16:52:44.979962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:44.217 [2024-07-22 16:52:45.802391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.217 [2024-07-22 16:52:45.834561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:44.474 00:14:44.474 INFO: Checking if target configuration is the same... 
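Shutdown and relaunch follow a simple pattern: SIGINT the target, poll with kill -0 until the pid disappears (up to 30 half-second attempts, as in the loop above), then start a fresh spdk_tgt from the saved JSON. In outline, with the real logic living in json_config_test_shutdown_app and json_config_test_start_app in test/json_config/common.sh:
kill -SIGINT "${app_pid[target]}"        # 60380 in this run
for ((i = 0; i < 30; i++)); do
    kill -0 "${app_pid[target]}" 2> /dev/null || break
    sleep 0.5
done
# Relaunch from the configuration captured earlier:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &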
00:14:44.474 16:52:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.474 16:52:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:14:44.474 16:52:45 json_config -- json_config/common.sh@26 -- # echo '' 00:14:44.474 16:52:45 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:14:44.474 16:52:45 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:14:44.474 16:52:45 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:14:44.474 16:52:45 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:44.474 16:52:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:44.474 + '[' 2 -ne 2 ']' 00:14:44.474 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:14:44.474 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:14:44.474 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:44.474 +++ basename /dev/fd/62 00:14:44.474 ++ mktemp /tmp/62.XXX 00:14:44.474 + tmp_file_1=/tmp/62.yeB 00:14:44.474 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:44.474 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:44.474 + tmp_file_2=/tmp/spdk_tgt_config.json.G0G 00:14:44.474 + ret=0 00:14:44.474 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:44.732 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:44.990 + diff -u /tmp/62.yeB /tmp/spdk_tgt_config.json.G0G 00:14:44.990 INFO: JSON config files are the same 00:14:44.990 + echo 'INFO: JSON config files are the same' 00:14:44.990 + rm /tmp/62.yeB /tmp/spdk_tgt_config.json.G0G 00:14:44.990 + exit 0 00:14:44.990 INFO: changing configuration and checking if this can be detected... 00:14:44.990 16:52:46 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:14:44.990 16:52:46 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:14:44.990 16:52:46 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:44.990 16:52:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:44.990 16:52:46 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:44.990 16:52:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:14:44.990 16:52:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:44.990 + '[' 2 -ne 2 ']' 00:14:44.990 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:14:44.990 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
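json_diff.sh, traced above, reduces "is the live target config the same as the saved file" to a plain diff: both inputs are normalized with config_filter.py -method sort into temp files, then compared with diff -u, and the script's exit status is the verdict. Roughly as follows, where file_1 and file_2 are placeholders for the two inputs (the live save_config output on /dev/fd/62 and spdk_tgt_config.json) and the exact input/output plumbing is in test/json_config/json_diff.sh:
config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
$config_filter -method sort < "$file_1" > "$tmp_file_1"   # e.g. /tmp/62.yeB
$config_filter -method sort < "$file_2" > "$tmp_file_2"   # e.g. /tmp/spdk_tgt_config.json.G0G
if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
else
    exit 1   # a non-empty diff is how the configuration change is detected
fi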
00:14:44.990 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:44.990 +++ basename /dev/fd/62 00:14:44.990 ++ mktemp /tmp/62.XXX 00:14:44.990 + tmp_file_1=/tmp/62.em9 00:14:44.990 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:44.990 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:45.282 + tmp_file_2=/tmp/spdk_tgt_config.json.7de 00:14:45.282 + ret=0 00:14:45.282 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:45.541 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:45.541 + diff -u /tmp/62.em9 /tmp/spdk_tgt_config.json.7de 00:14:45.541 + ret=1 00:14:45.541 + echo '=== Start of file: /tmp/62.em9 ===' 00:14:45.541 + cat /tmp/62.em9 00:14:45.541 + echo '=== End of file: /tmp/62.em9 ===' 00:14:45.541 + echo '' 00:14:45.541 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7de ===' 00:14:45.541 + cat /tmp/spdk_tgt_config.json.7de 00:14:45.541 + echo '=== End of file: /tmp/spdk_tgt_config.json.7de ===' 00:14:45.541 + echo '' 00:14:45.541 + rm /tmp/62.em9 /tmp/spdk_tgt_config.json.7de 00:14:45.541 + exit 1 00:14:45.541 INFO: configuration change detected. 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@321 -- # [[ -n 60595 ]] 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@197 -- # uname -s 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:45.541 16:52:47 json_config -- json_config/json_config.sh@327 -- # killprocess 60595 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 60595 ']' 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@952 -- # kill -0 60595 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@953 -- # uname 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.541 16:52:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60595 00:14:45.799 
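# The lines around here are killprocess tearing down pid 60595 after the change
# was detected. A sketch of that teardown sequence, matching the checks visible
# in the trace (the real helper in common/autotest_common.sh adds more checks
# and sudo handling):
killprocess() {
    local pid=$1
    kill -0 "$pid"                       # make sure the target is still running
    ps --no-headers -o comm= "$pid"      # name check; the trace resolves it to reactor_0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}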
killing process with pid 60595 00:14:45.799 16:52:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.799 16:52:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.799 16:52:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60595' 00:14:45.799 16:52:47 json_config -- common/autotest_common.sh@967 -- # kill 60595 00:14:45.799 16:52:47 json_config -- common/autotest_common.sh@972 -- # wait 60595 00:14:46.749 16:52:48 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:46.749 16:52:48 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:14:46.749 16:52:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.749 16:52:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:46.749 16:52:48 json_config -- json_config/json_config.sh@332 -- # return 0 00:14:46.749 INFO: Success 00:14:46.749 16:52:48 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:14:46.749 00:14:46.749 real 0m12.175s 00:14:46.749 user 0m15.342s 00:14:46.749 sys 0m2.159s 00:14:46.749 ************************************ 00:14:46.749 END TEST json_config 00:14:46.749 ************************************ 00:14:46.749 16:52:48 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.749 16:52:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 16:52:48 -- common/autotest_common.sh@1142 -- # return 0 00:14:47.007 16:52:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:47.007 16:52:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:47.007 16:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.007 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 ************************************ 00:14:47.007 START TEST json_config_extra_key 00:14:47.007 ************************************ 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.007 16:52:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.007 16:52:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.007 16:52:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.007 16:52:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.007 16:52:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.007 16:52:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.007 16:52:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:14:47.007 16:52:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.007 16:52:48 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.007 16:52:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:47.007 INFO: launching applications... 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:14:47.007 16:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:47.007 Waiting for target to run... 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60761 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60761 /var/tmp/spdk_tgt.sock 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60761 ']' 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:47.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
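json_config/common.sh keys everything by an app name ("target" here) in bash associative arrays: one array each for the PID, the RPC socket, the extra spdk_tgt parameters, and the JSON config path. A small sketch of that bookkeeping pattern, with values copied from the trace above and a hypothetical helper name:

# Condensed illustration of the per-app arrays seen in the trace; not the real common.sh.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

describe_app() {
    # Print how a given app would be launched, using the arrays above.
    local app=$1
    echo "app=$app socket=${app_socket[$app]} params=${app_params[$app]}" \
         "config=${configs_path[$app]} pid=${app_pid[$app]:-unset}"
}

describe_app target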
00:14:47.007 16:52:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.007 16:52:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 [2024-07-22 16:52:48.581584] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:47.007 [2024-07-22 16:52:48.581726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60761 ] 00:14:47.573 [2024-07-22 16:52:48.972741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.831 [2024-07-22 16:52:49.213916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.088 [2024-07-22 16:52:49.469479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.654 00:14:48.654 INFO: shutting down applications... 00:14:48.654 16:52:50 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.654 16:52:50 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:14:48.654 16:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
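The target above is launched with a private RPC socket (-r) and a JSON config (--json), and waitforlisten then polls until that socket answers. A rough sketch of the start-and-wait pattern, assuming the paths from the trace; the readiness probe here uses rpc.py spdk_get_version instead of the helper's internal logic, which is a simplification:

# Sketch only: launch spdk_tgt from a JSON config and wait for its RPC socket.
rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock
config=$rootdir/test/json_config/extra_key.json

"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$config" &
tgt_pid=$!

echo 'Waiting for target to run...'
for ((i = 0; i < 100; i++)); do
    # The RPC socket only answers once the app has finished initialization.
    if "$rootdir/scripts/rpc.py" -s "$sock" -t 2 spdk_get_version &> /dev/null; then
        echo "target ready, pid $tgt_pid"
        break
    fi
    sleep 0.5
done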
00:14:48.654 16:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60761 ]] 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60761 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:48.654 16:52:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:49.221 16:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:49.221 16:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:49.221 16:52:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:49.221 16:52:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:49.826 16:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:49.826 16:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:49.826 16:52:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:49.826 16:52:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:50.085 16:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:50.085 16:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:50.085 16:52:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:50.085 16:52:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:50.649 16:52:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:50.649 16:52:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:50.649 16:52:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:50.649 16:52:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:51.215 16:52:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:51.215 16:52:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:51.215 16:52:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:51.215 16:52:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:51.781 16:52:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:51.781 16:52:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:51.781 16:52:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:51.781 16:52:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:52.040 16:52:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:52.040 16:52:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:52.040 16:52:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60761 00:14:52.040 SPDK target shutdown done 00:14:52.040 Success 00:14:52.040 16:52:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:52.040 16:52:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:14:52.040 16:52:53 json_config_extra_key -- 
json_config/common.sh@48 -- # [[ -n '' ]] 00:14:52.040 16:52:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:52.040 16:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:14:52.040 00:14:52.040 real 0m5.268s 00:14:52.040 user 0m4.804s 00:14:52.040 sys 0m0.576s 00:14:52.040 ************************************ 00:14:52.040 END TEST json_config_extra_key 00:14:52.040 ************************************ 00:14:52.040 16:52:53 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.040 16:52:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:52.299 16:52:53 -- common/autotest_common.sh@1142 -- # return 0 00:14:52.299 16:52:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:52.299 16:52:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:52.299 16:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.299 16:52:53 -- common/autotest_common.sh@10 -- # set +x 00:14:52.299 ************************************ 00:14:52.299 START TEST alias_rpc 00:14:52.299 ************************************ 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:52.299 * Looking for test storage... 00:14:52.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:14:52.299 16:52:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:52.299 16:52:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60871 00:14:52.299 16:52:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:52.299 16:52:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60871 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60871 ']' 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.299 16:52:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.299 [2024-07-22 16:52:53.912577] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
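The json_config_extra_key shutdown above sends SIGINT and then polls kill -0 in half-second steps, giving the target up to 30 iterations to exit before giving up. A minimal sketch of that pattern (the function name is hypothetical):

# Sketch of the SIGINT + kill -0 polling shutdown seen in the trace above.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2> /dev/null || return 0   # already gone
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only checks that the process still exists.
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid did not exit in time" >&2
    return 1
}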
00:14:52.299 [2024-07-22 16:52:53.912716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60871 ] 00:14:52.557 [2024-07-22 16:52:54.078185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.816 [2024-07-22 16:52:54.330468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.074 [2024-07-22 16:52:54.583693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:54.010 16:52:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:14:54.010 16:52:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60871 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60871 ']' 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60871 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60871 00:14:54.010 killing process with pid 60871 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60871' 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@967 -- # kill 60871 00:14:54.010 16:52:55 alias_rpc -- common/autotest_common.sh@972 -- # wait 60871 00:14:57.295 ************************************ 00:14:57.295 END TEST alias_rpc 00:14:57.295 ************************************ 00:14:57.295 00:14:57.295 real 0m4.740s 00:14:57.295 user 0m4.778s 00:14:57.295 sys 0m0.549s 00:14:57.295 16:52:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.295 16:52:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 16:52:58 -- common/autotest_common.sh@1142 -- # return 0 00:14:57.295 16:52:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:14:57.295 16:52:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:57.295 16:52:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:57.295 16:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.295 16:52:58 -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 ************************************ 00:14:57.295 START TEST spdkcli_tcp 00:14:57.295 ************************************ 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:57.295 * Looking for test storage... 
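killprocess, used at the end of each test above, resolves the PID's command name with ps before signalling it (so a stray PID that now belongs to sudo is never killed) and then waits so the exit status is collected. A condensed sketch of that sequence, following only the checks visible in the trace:

# Condensed killprocess sketch; the real helper lives in common/autotest_common.sh.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 1          # nothing to do if already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # Linux path, as in the trace
    [[ $process_name != sudo ]] || return 1          # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                 # reap it; ignore not-a-child noise
}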
00:14:57.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60981 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:57.295 16:52:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60981 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60981 ']' 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.295 16:52:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 [2024-07-22 16:52:58.721306] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
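This spdk_tgt instance is started with -m 0x3, a hex core mask that selects cores 0 and 1, which is why the log below reports two reactors. A tiny, purely illustrative decode of such a mask:

# Illustration only: list the CPU cores selected by an SPDK-style -m core mask.
mask=0x3
for ((core = 0; core < 64; core++)); do
    if (( (mask >> core) & 1 )); then
        echo "reactor expected on core $core"
    fi
done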
00:14:57.295 [2024-07-22 16:52:58.721447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:14:57.295 [2024-07-22 16:52:58.891115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:57.554 [2024-07-22 16:52:59.169390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.554 [2024-07-22 16:52:59.169410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.118 [2024-07-22 16:52:59.456877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:58.683 16:53:00 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.683 16:53:00 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:14:58.683 16:53:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=61004 00:14:58.683 16:53:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:14:58.683 16:53:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:14:58.946 [ 00:14:58.946 "bdev_malloc_delete", 00:14:58.946 "bdev_malloc_create", 00:14:58.946 "bdev_null_resize", 00:14:58.946 "bdev_null_delete", 00:14:58.946 "bdev_null_create", 00:14:58.946 "bdev_nvme_cuse_unregister", 00:14:58.946 "bdev_nvme_cuse_register", 00:14:58.946 "bdev_opal_new_user", 00:14:58.946 "bdev_opal_set_lock_state", 00:14:58.946 "bdev_opal_delete", 00:14:58.946 "bdev_opal_get_info", 00:14:58.946 "bdev_opal_create", 00:14:58.946 "bdev_nvme_opal_revert", 00:14:58.946 "bdev_nvme_opal_init", 00:14:58.946 "bdev_nvme_send_cmd", 00:14:58.946 "bdev_nvme_get_path_iostat", 00:14:58.946 "bdev_nvme_get_mdns_discovery_info", 00:14:58.946 "bdev_nvme_stop_mdns_discovery", 00:14:58.946 "bdev_nvme_start_mdns_discovery", 00:14:58.946 "bdev_nvme_set_multipath_policy", 00:14:58.946 "bdev_nvme_set_preferred_path", 00:14:58.946 "bdev_nvme_get_io_paths", 00:14:58.946 "bdev_nvme_remove_error_injection", 00:14:58.946 "bdev_nvme_add_error_injection", 00:14:58.946 "bdev_nvme_get_discovery_info", 00:14:58.946 "bdev_nvme_stop_discovery", 00:14:58.946 "bdev_nvme_start_discovery", 00:14:58.946 "bdev_nvme_get_controller_health_info", 00:14:58.946 "bdev_nvme_disable_controller", 00:14:58.946 "bdev_nvme_enable_controller", 00:14:58.946 "bdev_nvme_reset_controller", 00:14:58.946 "bdev_nvme_get_transport_statistics", 00:14:58.946 "bdev_nvme_apply_firmware", 00:14:58.946 "bdev_nvme_detach_controller", 00:14:58.946 "bdev_nvme_get_controllers", 00:14:58.946 "bdev_nvme_attach_controller", 00:14:58.946 "bdev_nvme_set_hotplug", 00:14:58.946 "bdev_nvme_set_options", 00:14:58.946 "bdev_passthru_delete", 00:14:58.946 "bdev_passthru_create", 00:14:58.946 "bdev_lvol_set_parent_bdev", 00:14:58.946 "bdev_lvol_set_parent", 00:14:58.946 "bdev_lvol_check_shallow_copy", 00:14:58.946 "bdev_lvol_start_shallow_copy", 00:14:58.946 "bdev_lvol_grow_lvstore", 00:14:58.946 "bdev_lvol_get_lvols", 00:14:58.946 "bdev_lvol_get_lvstores", 00:14:58.946 "bdev_lvol_delete", 00:14:58.946 "bdev_lvol_set_read_only", 00:14:58.946 "bdev_lvol_resize", 00:14:58.946 "bdev_lvol_decouple_parent", 00:14:58.946 "bdev_lvol_inflate", 00:14:58.946 "bdev_lvol_rename", 00:14:58.946 "bdev_lvol_clone_bdev", 00:14:58.946 "bdev_lvol_clone", 00:14:58.947 "bdev_lvol_snapshot", 00:14:58.947 "bdev_lvol_create", 
00:14:58.947 "bdev_lvol_delete_lvstore", 00:14:58.947 "bdev_lvol_rename_lvstore", 00:14:58.947 "bdev_lvol_create_lvstore", 00:14:58.947 "bdev_raid_set_options", 00:14:58.947 "bdev_raid_remove_base_bdev", 00:14:58.947 "bdev_raid_add_base_bdev", 00:14:58.947 "bdev_raid_delete", 00:14:58.947 "bdev_raid_create", 00:14:58.947 "bdev_raid_get_bdevs", 00:14:58.947 "bdev_error_inject_error", 00:14:58.947 "bdev_error_delete", 00:14:58.947 "bdev_error_create", 00:14:58.947 "bdev_split_delete", 00:14:58.947 "bdev_split_create", 00:14:58.947 "bdev_delay_delete", 00:14:58.947 "bdev_delay_create", 00:14:58.947 "bdev_delay_update_latency", 00:14:58.947 "bdev_zone_block_delete", 00:14:58.947 "bdev_zone_block_create", 00:14:58.947 "blobfs_create", 00:14:58.947 "blobfs_detect", 00:14:58.947 "blobfs_set_cache_size", 00:14:58.947 "bdev_aio_delete", 00:14:58.947 "bdev_aio_rescan", 00:14:58.947 "bdev_aio_create", 00:14:58.947 "bdev_ftl_set_property", 00:14:58.947 "bdev_ftl_get_properties", 00:14:58.947 "bdev_ftl_get_stats", 00:14:58.947 "bdev_ftl_unmap", 00:14:58.947 "bdev_ftl_unload", 00:14:58.947 "bdev_ftl_delete", 00:14:58.947 "bdev_ftl_load", 00:14:58.947 "bdev_ftl_create", 00:14:58.947 "bdev_virtio_attach_controller", 00:14:58.947 "bdev_virtio_scsi_get_devices", 00:14:58.947 "bdev_virtio_detach_controller", 00:14:58.947 "bdev_virtio_blk_set_hotplug", 00:14:58.947 "bdev_iscsi_delete", 00:14:58.947 "bdev_iscsi_create", 00:14:58.947 "bdev_iscsi_set_options", 00:14:58.947 "bdev_uring_delete", 00:14:58.947 "bdev_uring_rescan", 00:14:58.947 "bdev_uring_create", 00:14:58.947 "accel_error_inject_error", 00:14:58.947 "ioat_scan_accel_module", 00:14:58.947 "dsa_scan_accel_module", 00:14:58.947 "iaa_scan_accel_module", 00:14:58.947 "vfu_virtio_create_scsi_endpoint", 00:14:58.947 "vfu_virtio_scsi_remove_target", 00:14:58.947 "vfu_virtio_scsi_add_target", 00:14:58.947 "vfu_virtio_create_blk_endpoint", 00:14:58.947 "vfu_virtio_delete_endpoint", 00:14:58.947 "keyring_file_remove_key", 00:14:58.947 "keyring_file_add_key", 00:14:58.947 "keyring_linux_set_options", 00:14:58.947 "iscsi_get_histogram", 00:14:58.947 "iscsi_enable_histogram", 00:14:58.947 "iscsi_set_options", 00:14:58.947 "iscsi_get_auth_groups", 00:14:58.947 "iscsi_auth_group_remove_secret", 00:14:58.947 "iscsi_auth_group_add_secret", 00:14:58.947 "iscsi_delete_auth_group", 00:14:58.947 "iscsi_create_auth_group", 00:14:58.947 "iscsi_set_discovery_auth", 00:14:58.947 "iscsi_get_options", 00:14:58.947 "iscsi_target_node_request_logout", 00:14:58.947 "iscsi_target_node_set_redirect", 00:14:58.947 "iscsi_target_node_set_auth", 00:14:58.947 "iscsi_target_node_add_lun", 00:14:58.947 "iscsi_get_stats", 00:14:58.947 "iscsi_get_connections", 00:14:58.947 "iscsi_portal_group_set_auth", 00:14:58.947 "iscsi_start_portal_group", 00:14:58.947 "iscsi_delete_portal_group", 00:14:58.947 "iscsi_create_portal_group", 00:14:58.947 "iscsi_get_portal_groups", 00:14:58.947 "iscsi_delete_target_node", 00:14:58.947 "iscsi_target_node_remove_pg_ig_maps", 00:14:58.947 "iscsi_target_node_add_pg_ig_maps", 00:14:58.947 "iscsi_create_target_node", 00:14:58.947 "iscsi_get_target_nodes", 00:14:58.947 "iscsi_delete_initiator_group", 00:14:58.947 "iscsi_initiator_group_remove_initiators", 00:14:58.947 "iscsi_initiator_group_add_initiators", 00:14:58.947 "iscsi_create_initiator_group", 00:14:58.947 "iscsi_get_initiator_groups", 00:14:58.947 "nvmf_set_crdt", 00:14:58.947 "nvmf_set_config", 00:14:58.947 "nvmf_set_max_subsystems", 00:14:58.947 "nvmf_stop_mdns_prr", 00:14:58.947 
"nvmf_publish_mdns_prr", 00:14:58.947 "nvmf_subsystem_get_listeners", 00:14:58.947 "nvmf_subsystem_get_qpairs", 00:14:58.947 "nvmf_subsystem_get_controllers", 00:14:58.947 "nvmf_get_stats", 00:14:58.947 "nvmf_get_transports", 00:14:58.947 "nvmf_create_transport", 00:14:58.947 "nvmf_get_targets", 00:14:58.947 "nvmf_delete_target", 00:14:58.947 "nvmf_create_target", 00:14:58.947 "nvmf_subsystem_allow_any_host", 00:14:58.947 "nvmf_subsystem_remove_host", 00:14:58.947 "nvmf_subsystem_add_host", 00:14:58.947 "nvmf_ns_remove_host", 00:14:58.947 "nvmf_ns_add_host", 00:14:58.947 "nvmf_subsystem_remove_ns", 00:14:58.947 "nvmf_subsystem_add_ns", 00:14:58.947 "nvmf_subsystem_listener_set_ana_state", 00:14:58.947 "nvmf_discovery_get_referrals", 00:14:58.947 "nvmf_discovery_remove_referral", 00:14:58.947 "nvmf_discovery_add_referral", 00:14:58.947 "nvmf_subsystem_remove_listener", 00:14:58.947 "nvmf_subsystem_add_listener", 00:14:58.947 "nvmf_delete_subsystem", 00:14:58.947 "nvmf_create_subsystem", 00:14:58.947 "nvmf_get_subsystems", 00:14:58.947 "env_dpdk_get_mem_stats", 00:14:58.947 "nbd_get_disks", 00:14:58.947 "nbd_stop_disk", 00:14:58.947 "nbd_start_disk", 00:14:58.947 "ublk_recover_disk", 00:14:58.947 "ublk_get_disks", 00:14:58.947 "ublk_stop_disk", 00:14:58.947 "ublk_start_disk", 00:14:58.947 "ublk_destroy_target", 00:14:58.947 "ublk_create_target", 00:14:58.947 "virtio_blk_create_transport", 00:14:58.947 "virtio_blk_get_transports", 00:14:58.947 "vhost_controller_set_coalescing", 00:14:58.947 "vhost_get_controllers", 00:14:58.947 "vhost_delete_controller", 00:14:58.947 "vhost_create_blk_controller", 00:14:58.947 "vhost_scsi_controller_remove_target", 00:14:58.947 "vhost_scsi_controller_add_target", 00:14:58.947 "vhost_start_scsi_controller", 00:14:58.947 "vhost_create_scsi_controller", 00:14:58.947 "thread_set_cpumask", 00:14:58.947 "framework_get_governor", 00:14:58.947 "framework_get_scheduler", 00:14:58.947 "framework_set_scheduler", 00:14:58.947 "framework_get_reactors", 00:14:58.947 "thread_get_io_channels", 00:14:58.947 "thread_get_pollers", 00:14:58.947 "thread_get_stats", 00:14:58.947 "framework_monitor_context_switch", 00:14:58.947 "spdk_kill_instance", 00:14:58.947 "log_enable_timestamps", 00:14:58.947 "log_get_flags", 00:14:58.947 "log_clear_flag", 00:14:58.947 "log_set_flag", 00:14:58.947 "log_get_level", 00:14:58.947 "log_set_level", 00:14:58.947 "log_get_print_level", 00:14:58.947 "log_set_print_level", 00:14:58.947 "framework_enable_cpumask_locks", 00:14:58.947 "framework_disable_cpumask_locks", 00:14:58.947 "framework_wait_init", 00:14:58.947 "framework_start_init", 00:14:58.947 "scsi_get_devices", 00:14:58.947 "bdev_get_histogram", 00:14:58.947 "bdev_enable_histogram", 00:14:58.947 "bdev_set_qos_limit", 00:14:58.947 "bdev_set_qd_sampling_period", 00:14:58.947 "bdev_get_bdevs", 00:14:58.947 "bdev_reset_iostat", 00:14:58.947 "bdev_get_iostat", 00:14:58.947 "bdev_examine", 00:14:58.947 "bdev_wait_for_examine", 00:14:58.947 "bdev_set_options", 00:14:58.947 "notify_get_notifications", 00:14:58.947 "notify_get_types", 00:14:58.947 "accel_get_stats", 00:14:58.947 "accel_set_options", 00:14:58.947 "accel_set_driver", 00:14:58.947 "accel_crypto_key_destroy", 00:14:58.947 "accel_crypto_keys_get", 00:14:58.947 "accel_crypto_key_create", 00:14:58.947 "accel_assign_opc", 00:14:58.947 "accel_get_module_info", 00:14:58.947 "accel_get_opc_assignments", 00:14:58.947 "vmd_rescan", 00:14:58.947 "vmd_remove_device", 00:14:58.947 "vmd_enable", 00:14:58.947 "sock_get_default_impl", 00:14:58.947 
"sock_set_default_impl", 00:14:58.947 "sock_impl_set_options", 00:14:58.947 "sock_impl_get_options", 00:14:58.947 "iobuf_get_stats", 00:14:58.947 "iobuf_set_options", 00:14:58.947 "keyring_get_keys", 00:14:58.947 "framework_get_pci_devices", 00:14:58.947 "framework_get_config", 00:14:58.947 "framework_get_subsystems", 00:14:58.947 "vfu_tgt_set_base_path", 00:14:58.947 "trace_get_info", 00:14:58.947 "trace_get_tpoint_group_mask", 00:14:58.947 "trace_disable_tpoint_group", 00:14:58.947 "trace_enable_tpoint_group", 00:14:58.947 "trace_clear_tpoint_mask", 00:14:58.947 "trace_set_tpoint_mask", 00:14:58.947 "spdk_get_version", 00:14:58.947 "rpc_get_methods" 00:14:58.947 ] 00:14:58.947 16:53:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.947 16:53:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:58.947 16:53:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60981 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60981 ']' 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 60981 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.947 16:53:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60981 00:14:59.215 killing process with pid 60981 00:14:59.215 16:53:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.215 16:53:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.215 16:53:00 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60981' 00:14:59.215 16:53:00 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60981 00:14:59.215 16:53:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60981 00:15:02.498 ************************************ 00:15:02.498 END TEST spdkcli_tcp 00:15:02.498 ************************************ 00:15:02.498 00:15:02.498 real 0m4.926s 00:15:02.498 user 0m8.688s 00:15:02.498 sys 0m0.637s 00:15:02.498 16:53:03 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.498 16:53:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.498 16:53:03 -- common/autotest_common.sh@1142 -- # return 0 00:15:02.498 16:53:03 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:02.498 16:53:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:02.498 16:53:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.498 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:15:02.498 ************************************ 00:15:02.498 START TEST dpdk_mem_utility 00:15:02.498 ************************************ 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:02.498 * Looking for test storage... 
00:15:02.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:02.498 16:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:02.498 16:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61106 00:15:02.498 16:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.498 16:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61106 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61106 ']' 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.498 16:53:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:02.498 [2024-07-22 16:53:03.722736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:02.498 [2024-07-22 16:53:03.722924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61106 ] 00:15:02.498 [2024-07-22 16:53:03.901521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.756 [2024-07-22 16:53:04.165759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.015 [2024-07-22 16:53:04.444110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.583 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.583 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:15:03.583 16:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:03.583 16:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:03.583 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.583 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:03.583 { 00:15:03.583 "filename": "/tmp/spdk_mem_dump.txt" 00:15:03.583 } 00:15:03.583 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.583 16:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:03.842 DPDK memory size 820.000000 MiB in 1 heap(s) 00:15:03.842 1 heaps totaling size 820.000000 MiB 00:15:03.842 size: 820.000000 MiB heap id: 0 00:15:03.842 end heaps---------- 00:15:03.842 8 mempools totaling size 598.116089 MiB 00:15:03.842 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:03.842 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:03.842 size: 84.521057 MiB name: bdev_io_61106 00:15:03.842 size: 51.011292 MiB name: evtpool_61106 00:15:03.842 size: 50.003479 
MiB name: msgpool_61106 00:15:03.842 size: 21.763794 MiB name: PDU_Pool 00:15:03.842 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:03.842 size: 0.026123 MiB name: Session_Pool 00:15:03.842 end mempools------- 00:15:03.842 6 memzones totaling size 4.142822 MiB 00:15:03.842 size: 1.000366 MiB name: RG_ring_0_61106 00:15:03.842 size: 1.000366 MiB name: RG_ring_1_61106 00:15:03.842 size: 1.000366 MiB name: RG_ring_4_61106 00:15:03.842 size: 1.000366 MiB name: RG_ring_5_61106 00:15:03.842 size: 0.125366 MiB name: RG_ring_2_61106 00:15:03.842 size: 0.015991 MiB name: RG_ring_3_61106 00:15:03.842 end memzones------- 00:15:03.842 16:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:03.842 heap id: 0 total size: 820.000000 MiB number of busy elements: 288 number of free elements: 18 00:15:03.842 list of free elements. size: 18.454468 MiB 00:15:03.842 element at address: 0x200000400000 with size: 1.999451 MiB 00:15:03.842 element at address: 0x200000800000 with size: 1.996887 MiB 00:15:03.842 element at address: 0x200007000000 with size: 1.995972 MiB 00:15:03.842 element at address: 0x20000b200000 with size: 1.995972 MiB 00:15:03.842 element at address: 0x200019100040 with size: 0.999939 MiB 00:15:03.842 element at address: 0x200019500040 with size: 0.999939 MiB 00:15:03.842 element at address: 0x200019600000 with size: 0.999084 MiB 00:15:03.842 element at address: 0x200003e00000 with size: 0.996094 MiB 00:15:03.842 element at address: 0x200032200000 with size: 0.994324 MiB 00:15:03.842 element at address: 0x200018e00000 with size: 0.959656 MiB 00:15:03.842 element at address: 0x200019900040 with size: 0.936401 MiB 00:15:03.842 element at address: 0x200000200000 with size: 0.829956 MiB 00:15:03.842 element at address: 0x20001b000000 with size: 0.567078 MiB 00:15:03.842 element at address: 0x200019200000 with size: 0.487976 MiB 00:15:03.842 element at address: 0x200019a00000 with size: 0.485413 MiB 00:15:03.842 element at address: 0x200013800000 with size: 0.467896 MiB 00:15:03.842 element at address: 0x200028400000 with size: 0.390442 MiB 00:15:03.842 element at address: 0x200003a00000 with size: 0.351990 MiB 00:15:03.842 list of standard malloc elements. 
size: 199.281128 MiB 00:15:03.842 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:15:03.842 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:15:03.843 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:15:03.843 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:15:03.843 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:15:03.843 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:15:03.843 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:15:03.843 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:15:03.843 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:15:03.843 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:15:03.843 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:15:03.843 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:15:03.843 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003aff980 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003affa80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200003eff000 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:15:03.843 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013877c80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013877d80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013877e80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013877f80 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013878080 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013878180 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013878280 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013878380 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013878480 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200013878580 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x200019abc680 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:15:03.843 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b091ac0 
with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b094bc0 with size: 0.000244 MiB 
00:15:03.844 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:15:03.844 element at address: 0x200028463f40 with size: 0.000244 MiB 00:15:03.844 element at address: 0x200028464040 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846af80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b080 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b180 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b280 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b380 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b480 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b580 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b680 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b780 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b880 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846b980 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846be80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c080 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c180 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c280 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c380 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c480 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c580 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c680 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c780 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c880 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846c980 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d080 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d180 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d280 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d380 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d480 with size: 0.000244 MiB 00:15:03.844 element at 
address: 0x20002846d580 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d680 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d780 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d880 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846d980 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846da80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846db80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846de80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846df80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e080 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e180 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e280 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e380 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e480 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e580 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e680 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e780 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e880 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846e980 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f080 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f180 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f280 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f380 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f480 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f580 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f680 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f780 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f880 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846f980 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:15:03.844 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:15:03.844 list of memzone associated elements. 
size: 602.264404 MiB 00:15:03.844 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:15:03.844 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:15:03.844 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:15:03.844 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:15:03.844 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:15:03.845 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61106_0 00:15:03.845 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:15:03.845 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61106_0 00:15:03.845 element at address: 0x200003fff340 with size: 48.003113 MiB 00:15:03.845 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61106_0 00:15:03.845 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:15:03.845 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:15:03.845 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:15:03.845 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:15:03.845 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:15:03.845 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61106 00:15:03.845 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:15:03.845 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61106 00:15:03.845 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:15:03.845 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61106 00:15:03.845 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:15:03.845 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:03.845 element at address: 0x200019abc780 with size: 1.008179 MiB 00:15:03.845 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:03.845 element at address: 0x200018efde00 with size: 1.008179 MiB 00:15:03.845 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:03.845 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:15:03.845 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:03.845 element at address: 0x200003eff100 with size: 1.000549 MiB 00:15:03.845 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61106 00:15:03.845 element at address: 0x200003affb80 with size: 1.000549 MiB 00:15:03.845 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61106 00:15:03.845 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:15:03.845 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61106 00:15:03.845 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:15:03.845 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61106 00:15:03.845 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:15:03.845 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61106 00:15:03.845 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:15:03.845 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:03.845 element at address: 0x200013878680 with size: 0.500549 MiB 00:15:03.845 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:03.845 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:15:03.845 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:15:03.845 element at address: 0x200003adf740 with size: 0.125549 MiB 00:15:03.845 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61106 00:15:03.845 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:15:03.845 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:03.845 element at address: 0x200028464140 with size: 0.023804 MiB 00:15:03.845 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:03.845 element at address: 0x200003adb500 with size: 0.016174 MiB 00:15:03.845 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61106 00:15:03.845 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:15:03.845 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:03.845 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:15:03.845 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61106 00:15:03.845 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:15:03.845 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61106 00:15:03.845 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:15:03.845 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:03.845 16:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:03.845 16:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61106 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61106 ']' 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61106 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61106 00:15:03.845 killing process with pid 61106 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61106' 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61106 00:15:03.845 16:53:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61106 00:15:07.127 00:15:07.127 real 0m4.667s 00:15:07.127 user 0m4.683s 00:15:07.127 sys 0m0.595s 00:15:07.127 ************************************ 00:15:07.127 END TEST dpdk_mem_utility 00:15:07.127 ************************************ 00:15:07.127 16:53:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.127 16:53:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:07.127 16:53:08 -- common/autotest_common.sh@1142 -- # return 0 00:15:07.127 16:53:08 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:07.127 16:53:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:07.127 16:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.127 16:53:08 -- common/autotest_common.sh@10 -- # set +x 00:15:07.127 ************************************ 00:15:07.127 START TEST event 00:15:07.127 ************************************ 00:15:07.127 16:53:08 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:07.127 * Looking for test storage... 
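The dpdk_mem_utility run above ends with the suite's standard cleanup: the traced commands first check that pid 61106 is non-empty and still alive (kill -0), confirm via ps that the target is an SPDK reactor rather than sudo, then kill it and wait for it to exit. A minimal sketch of that pattern, reconstructed only from the xtrace output shown here (the real helper lives in autotest_common.sh and may differ in its details):

  # Hypothetical re-creation of the killprocess-style cleanup traced above.
  cleanup_spdk_app() {
      local pid=$1
      [ -n "$pid" ] || return 1                 # no pid given, nothing to clean up
      kill -0 "$pid" 2>/dev/null || return 0    # process already gone
      if [ "$(uname)" = Linux ]; then
          # Check that we are pointing at an SPDK reactor (e.g. reactor_0), not sudo.
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # block until it has exited (works when it is our child)
  }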
00:15:07.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:07.127 16:53:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:07.127 16:53:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:15:07.127 16:53:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:07.127 16:53:08 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:15:07.127 16:53:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.127 16:53:08 event -- common/autotest_common.sh@10 -- # set +x 00:15:07.127 ************************************ 00:15:07.127 START TEST event_perf 00:15:07.127 ************************************ 00:15:07.127 16:53:08 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:07.127 Running I/O for 1 seconds...[2024-07-22 16:53:08.355577] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:07.127 [2024-07-22 16:53:08.355701] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61206 ] 00:15:07.127 [2024-07-22 16:53:08.530273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.385 [2024-07-22 16:53:08.855708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.385 [2024-07-22 16:53:08.855875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.385 [2024-07-22 16:53:08.855943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.385 Running I/O for 1 seconds...[2024-07-22 16:53:08.855959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.760 00:15:08.760 lcore 0: 174588 00:15:08.760 lcore 1: 174589 00:15:08.760 lcore 2: 174588 00:15:08.760 lcore 3: 174587 00:15:08.760 done. 00:15:08.760 00:15:08.760 real 0m2.007s 00:15:08.760 user 0m4.725s 00:15:08.760 sys 0m0.152s 00:15:08.760 16:53:10 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.760 16:53:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:08.760 ************************************ 00:15:08.760 END TEST event_perf 00:15:08.760 ************************************ 00:15:08.760 16:53:10 event -- common/autotest_common.sh@1142 -- # return 0 00:15:08.760 16:53:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:08.760 16:53:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:08.760 16:53:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.760 16:53:10 event -- common/autotest_common.sh@10 -- # set +x 00:15:09.019 ************************************ 00:15:09.019 START TEST event_reactor 00:15:09.019 ************************************ 00:15:09.019 16:53:10 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:09.019 [2024-07-22 16:53:10.434861] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:09.019 [2024-07-22 16:53:10.435033] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:15:09.019 [2024-07-22 16:53:10.623538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.584 [2024-07-22 16:53:10.939221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.960 test_start 00:15:10.960 oneshot 00:15:10.960 tick 100 00:15:10.960 tick 100 00:15:10.960 tick 250 00:15:10.960 tick 100 00:15:10.960 tick 100 00:15:10.960 tick 100 00:15:10.960 tick 250 00:15:10.960 tick 500 00:15:10.960 tick 100 00:15:10.960 tick 100 00:15:10.960 tick 250 00:15:10.960 tick 100 00:15:10.960 tick 100 00:15:10.960 test_end 00:15:10.960 00:15:10.960 real 0m2.035s 00:15:10.960 user 0m1.785s 00:15:10.960 sys 0m0.139s 00:15:10.960 16:53:12 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.960 ************************************ 00:15:10.960 END TEST event_reactor 00:15:10.960 ************************************ 00:15:10.960 16:53:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:15:10.960 16:53:12 event -- common/autotest_common.sh@1142 -- # return 0 00:15:10.960 16:53:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:10.960 16:53:12 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:10.960 16:53:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.960 16:53:12 event -- common/autotest_common.sh@10 -- # set +x 00:15:10.960 ************************************ 00:15:10.960 START TEST event_reactor_perf 00:15:10.960 ************************************ 00:15:10.960 16:53:12 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:10.960 [2024-07-22 16:53:12.514703] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:10.960 [2024-07-22 16:53:12.514846] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:15:11.218 [2024-07-22 16:53:12.685607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.484 [2024-07-22 16:53:12.948984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.858 test_start 00:15:12.858 test_end 00:15:12.858 Performance: 304564 events per second 00:15:12.858 00:15:12.858 real 0m1.972s 00:15:12.858 user 0m1.727s 00:15:12.858 sys 0m0.133s 00:15:12.858 16:53:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.858 16:53:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:15:12.858 ************************************ 00:15:12.858 END TEST event_reactor_perf 00:15:12.858 ************************************ 00:15:13.116 16:53:14 event -- common/autotest_common.sh@1142 -- # return 0 00:15:13.116 16:53:14 event -- event/event.sh@49 -- # uname -s 00:15:13.116 16:53:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:13.116 16:53:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:13.116 16:53:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:13.116 16:53:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.116 16:53:14 event -- common/autotest_common.sh@10 -- # set +x 00:15:13.116 ************************************ 00:15:13.116 START TEST event_scheduler 00:15:13.116 ************************************ 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:13.116 * Looking for test storage... 00:15:13.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:13.116 16:53:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:13.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.116 16:53:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61367 00:15:13.116 16:53:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:13.116 16:53:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:13.116 16:53:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61367 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 61367 ']' 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.116 16:53:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:13.116 [2024-07-22 16:53:14.693975] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:13.116 [2024-07-22 16:53:14.694336] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:15:13.374 [2024-07-22 16:53:14.862378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.632 [2024-07-22 16:53:15.145976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.632 [2024-07-22 16:53:15.146145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.632 [2024-07-22 16:53:15.146317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.632 [2024-07-22 16:53:15.146352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:15:14.199 16:53:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:14.199 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:14.199 POWER: Cannot set governor of lcore 0 to userspace 00:15:14.199 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:14.199 POWER: Cannot set governor of lcore 0 to performance 00:15:14.199 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:14.199 POWER: Cannot set governor of lcore 0 to userspace 00:15:14.199 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:14.199 POWER: Cannot set governor of lcore 0 to userspace 00:15:14.199 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:15:14.199 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:14.199 POWER: Unable to set Power Management Environment for lcore 0 00:15:14.199 [2024-07-22 16:53:15.658726] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:15:14.199 [2024-07-22 16:53:15.658822] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:15:14.199 [2024-07-22 16:53:15.658975] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:15:14.199 [2024-07-22 16:53:15.659212] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:15:14.199 [2024-07-22 16:53:15.659362] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:15:14.199 [2024-07-22 16:53:15.659512] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.199 16:53:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.199 16:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:14.457 [2024-07-22 16:53:15.906566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.457 [2024-07-22 16:53:16.039463] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:15:14.457 16:53:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.457 16:53:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:14.457 16:53:16 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:14.457 16:53:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.457 16:53:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:14.457 ************************************ 00:15:14.457 START TEST scheduler_create_thread 00:15:14.457 ************************************ 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.457 2 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.457 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.715 3 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.715 4 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.715 5 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.715 6 00:15:14.715 
16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.715 7 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:14.715 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 8 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 9 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 10 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.716 16:53:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.716 16:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:16.113 16:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.113 16:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:16.113 16:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:16.113 16:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.113 16:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:17.486 16:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.486 00:15:17.486 real 0m2.624s 00:15:17.486 user 0m0.019s 00:15:17.486 sys 0m0.006s 00:15:17.486 16:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.486 16:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:17.486 ************************************ 00:15:17.486 END TEST scheduler_create_thread 00:15:17.486 ************************************ 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:15:17.486 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:17.486 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61367 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 61367 ']' 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 61367 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61367 00:15:17.486 killing process with pid 61367 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61367' 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 61367 00:15:17.486 16:53:18 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 61367 00:15:17.744 [2024-07-22 16:53:19.159634] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
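The event_scheduler test above drives everything through scripts/rpc.py: it switches the framework to the dynamic scheduler (the POWER/cpufreq governor failures in the VM are non-fatal, the dpdk governor simply is not initialized), completes framework initialization, then creates, re-weights and deletes plugin-managed threads. A hand-driven sketch of the same RPC sequence, assuming an SPDK app started with --wait-for-rpc is listening on the default /var/tmp/spdk.sock and that the test's scheduler_plugin module is importable (e.g. via PYTHONPATH pointing at test/event/scheduler); thread names and values are copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_set_scheduler dynamic       # falls back gracefully without a cpufreq governor
  $rpc framework_start_init
  # The scheduler_thread_* RPCs come from the test plugin, not from core SPDK.
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # raise it to 50% busy
  tid2=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid2"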
00:15:19.118 ************************************ 00:15:19.118 END TEST event_scheduler 00:15:19.118 ************************************ 00:15:19.118 00:15:19.118 real 0m6.175s 00:15:19.118 user 0m10.307s 00:15:19.118 sys 0m0.494s 00:15:19.118 16:53:20 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.118 16:53:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:19.118 16:53:20 event -- common/autotest_common.sh@1142 -- # return 0 00:15:19.118 16:53:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:15:19.118 16:53:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:19.118 16:53:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.118 16:53:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.118 16:53:20 event -- common/autotest_common.sh@10 -- # set +x 00:15:19.376 ************************************ 00:15:19.376 START TEST app_repeat 00:15:19.376 ************************************ 00:15:19.376 16:53:20 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61484 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:19.376 Process app_repeat pid: 61484 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61484' 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:19.376 spdk_app_start Round 0 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:19.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:19.376 16:53:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61484 /var/tmp/spdk-nbd.sock 00:15:19.376 16:53:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61484 ']' 00:15:19.377 16:53:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:19.377 16:53:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.377 16:53:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:19.377 16:53:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.377 16:53:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:19.377 [2024-07-22 16:53:20.818081] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:19.377 [2024-07-22 16:53:20.818522] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61484 ] 00:15:19.634 [2024-07-22 16:53:21.009575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:19.892 [2024-07-22 16:53:21.329903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.892 [2024-07-22 16:53:21.329910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.149 [2024-07-22 16:53:21.597644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:20.408 16:53:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.408 16:53:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:20.408 16:53:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:20.666 Malloc0 00:15:20.666 16:53:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:20.924 Malloc1 00:15:20.925 16:53:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:20.925 16:53:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:21.183 /dev/nbd0 00:15:21.183 16:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.183 16:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:21.183 16:53:22 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:21.183 1+0 records in 00:15:21.183 1+0 records out 00:15:21.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218306 s, 18.8 MB/s 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:21.183 16:53:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:21.183 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.183 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.183 16:53:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:21.441 /dev/nbd1 00:15:21.441 16:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:21.441 16:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:21.442 16:53:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:21.442 16:53:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:21.442 16:53:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:21.442 16:53:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:21.442 16:53:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:21.700 1+0 records in 00:15:21.700 1+0 records out 00:15:21.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296989 s, 13.8 MB/s 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:21.700 16:53:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:21.700 16:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.700 16:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.700 16:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:15:21.700 16:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.700 16:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:21.959 { 00:15:21.959 "nbd_device": "/dev/nbd0", 00:15:21.959 "bdev_name": "Malloc0" 00:15:21.959 }, 00:15:21.959 { 00:15:21.959 "nbd_device": "/dev/nbd1", 00:15:21.959 "bdev_name": "Malloc1" 00:15:21.959 } 00:15:21.959 ]' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:21.959 { 00:15:21.959 "nbd_device": "/dev/nbd0", 00:15:21.959 "bdev_name": "Malloc0" 00:15:21.959 }, 00:15:21.959 { 00:15:21.959 "nbd_device": "/dev/nbd1", 00:15:21.959 "bdev_name": "Malloc1" 00:15:21.959 } 00:15:21.959 ]' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:21.959 /dev/nbd1' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:21.959 /dev/nbd1' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:21.959 256+0 records in 00:15:21.959 256+0 records out 00:15:21.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00874023 s, 120 MB/s 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:21.959 256+0 records in 00:15:21.959 256+0 records out 00:15:21.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329225 s, 31.8 MB/s 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.959 16:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:22.234 256+0 records in 00:15:22.234 256+0 records out 00:15:22.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291563 s, 36.0 MB/s 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.234 16:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.506 16:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.775 16:53:24 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.775 16:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:23.033 16:53:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:23.033 16:53:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:23.596 16:53:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:24.971 [2024-07-22 16:53:26.545259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:25.230 [2024-07-22 16:53:26.793817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.230 [2024-07-22 16:53:26.793818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.488 [2024-07-22 16:53:27.053724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:25.488 [2024-07-22 16:53:27.053855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:25.488 [2024-07-22 16:53:27.053878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:26.423 16:53:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:26.423 spdk_app_start Round 1 00:15:26.423 16:53:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:26.423 16:53:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61484 /var/tmp/spdk-nbd.sock 00:15:26.423 16:53:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61484 ']' 00:15:26.423 16:53:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:26.423 16:53:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:26.423 16:53:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
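Each app_repeat round above follows the same cycle: create two 64 MiB malloc bdevs over the /var/tmp/spdk-nbd.sock RPC socket, expose them as /dev/nbd0 and /dev/nbd1, write and verify 1 MiB of random data through the kernel block layer, detach them, and finally tell the app to exit so the next round can restart it. A condensed, single-device sketch of that cycle using only the RPCs and commands visible in the trace (the temp-file path here is arbitrary, not the test's own):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096              # 64 MiB bdev with 4096-byte blocks -> Malloc0
  $rpc nbd_start_disk Malloc0 /dev/nbd0        # attach it as a kernel nbd device
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0      # read back through nbd and compare
  $rpc nbd_stop_disk /dev/nbd0
  $rpc spdk_kill_instance SIGTERM              # end the round; app_repeat restarts the app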
00:15:26.423 16:53:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.423 16:53:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:26.686 16:53:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.686 16:53:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:26.686 16:53:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:26.944 Malloc0 00:15:27.201 16:53:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:27.460 Malloc1 00:15:27.460 16:53:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.460 16:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:27.460 /dev/nbd0 00:15:27.717 16:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.717 16:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:27.717 16:53:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:27.718 1+0 records in 00:15:27.718 1+0 records out 
00:15:27.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022737 s, 18.0 MB/s 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:27.718 16:53:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:27.718 16:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.718 16:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.718 16:53:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:27.976 /dev/nbd1 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:27.976 1+0 records in 00:15:27.976 1+0 records out 00:15:27.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529943 s, 7.7 MB/s 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:27.976 16:53:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.976 16:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:28.237 { 00:15:28.237 "nbd_device": "/dev/nbd0", 00:15:28.237 "bdev_name": "Malloc0" 00:15:28.237 }, 00:15:28.237 { 00:15:28.237 "nbd_device": "/dev/nbd1", 00:15:28.237 "bdev_name": "Malloc1" 00:15:28.237 } 
00:15:28.237 ]' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:28.237 { 00:15:28.237 "nbd_device": "/dev/nbd0", 00:15:28.237 "bdev_name": "Malloc0" 00:15:28.237 }, 00:15:28.237 { 00:15:28.237 "nbd_device": "/dev/nbd1", 00:15:28.237 "bdev_name": "Malloc1" 00:15:28.237 } 00:15:28.237 ]' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:28.237 /dev/nbd1' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:28.237 /dev/nbd1' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:28.237 256+0 records in 00:15:28.237 256+0 records out 00:15:28.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075753 s, 138 MB/s 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:28.237 256+0 records in 00:15:28.237 256+0 records out 00:15:28.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320158 s, 32.8 MB/s 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:28.237 256+0 records in 00:15:28.237 256+0 records out 00:15:28.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368355 s, 28.5 MB/s 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.237 16:53:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.522 16:53:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.780 16:53:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:29.037 16:53:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:29.037 16:53:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:29.603 16:53:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:30.975 [2024-07-22 16:53:32.582992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:31.234 [2024-07-22 16:53:32.824696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.234 [2024-07-22 16:53:32.824713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.492 [2024-07-22 16:53:33.079304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.492 [2024-07-22 16:53:33.079430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:31.492 [2024-07-22 16:53:33.079446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:32.868 16:53:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:32.868 16:53:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:32.868 spdk_app_start Round 2 00:15:32.868 16:53:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61484 /var/tmp/spdk-nbd.sock 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61484 ']' 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
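Each app_repeat round exercises the same write-and-verify round trip the dd and cmp records above come from: a 1 MiB random pattern is written to a temporary file, pushed through each NBD device with O_DIRECT, and then compared back byte for byte. A hedged sketch of that loop is below; the temp-file path is a placeholder (the trace uses test/event/nbdrandtest) while the block size, count, and cmp options mirror the trace.

    # Write a random pattern through each NBD device and verify it.
    tmp_file=$(mktemp)                                   # stand-in for nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                  # non-zero exit on mismatch
    done
    rm "$tmp_file"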
00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.868 16:53:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:32.868 16:53:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:33.126 Malloc0 00:15:33.126 16:53:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:33.384 Malloc1 00:15:33.384 16:53:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.384 16:53:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:33.950 /dev/nbd0 00:15:33.950 16:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.950 16:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:33.950 1+0 records in 00:15:33.950 1+0 records out 
00:15:33.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299634 s, 13.7 MB/s 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:33.950 16:53:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:33.950 16:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.950 16:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.950 16:53:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:34.208 /dev/nbd1 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:34.208 1+0 records in 00:15:34.208 1+0 records out 00:15:34.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324562 s, 12.6 MB/s 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:34.208 16:53:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.208 16:53:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:34.466 { 00:15:34.466 "nbd_device": "/dev/nbd0", 00:15:34.466 "bdev_name": "Malloc0" 00:15:34.466 }, 00:15:34.466 { 00:15:34.466 "nbd_device": "/dev/nbd1", 00:15:34.466 "bdev_name": "Malloc1" 00:15:34.466 } 
00:15:34.466 ]' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:34.466 { 00:15:34.466 "nbd_device": "/dev/nbd0", 00:15:34.466 "bdev_name": "Malloc0" 00:15:34.466 }, 00:15:34.466 { 00:15:34.466 "nbd_device": "/dev/nbd1", 00:15:34.466 "bdev_name": "Malloc1" 00:15:34.466 } 00:15:34.466 ]' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:34.466 /dev/nbd1' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:34.466 /dev/nbd1' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:34.466 256+0 records in 00:15:34.466 256+0 records out 00:15:34.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00871074 s, 120 MB/s 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.466 16:53:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:34.466 256+0 records in 00:15:34.467 256+0 records out 00:15:34.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310374 s, 33.8 MB/s 00:15:34.467 16:53:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.467 16:53:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:34.467 256+0 records in 00:15:34.467 256+0 records out 00:15:34.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383034 s, 27.4 MB/s 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:34.467 16:53:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.467 16:53:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.725 16:53:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.995 16:53:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:35.259 16:53:36 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:35.259 16:53:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:35.259 16:53:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:35.826 16:53:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:37.783 [2024-07-22 16:53:38.991897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:37.784 [2024-07-22 16:53:39.259143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.784 [2024-07-22 16:53:39.259144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.047 [2024-07-22 16:53:39.517722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:38.047 [2024-07-22 16:53:39.517882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:38.047 [2024-07-22 16:53:39.517905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:38.979 16:53:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61484 /var/tmp/spdk-nbd.sock 00:15:38.979 16:53:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61484 ']' 00:15:38.979 16:53:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:38.979 16:53:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:38.979 16:53:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
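After the devices are stopped, the helper confirms nothing is left exported: nbd_get_disks returns a JSON array, jq extracts the .nbd_device fields, and grep -c /dev/nbd counts them (the trailing true in the trace absorbs grep's non-zero exit status when the count is 0). A small sketch of that check, again assuming rpc.py is on PATH:

    # Return the number of NBD devices the SPDK target still exports.
    nbd_count() {
        local sock=$1 names
        names=$(rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
        # grep -c prints the count but exits 1 when it is 0, hence "|| true".
        echo "$names" | grep -c /dev/nbd || true
    }

    count=$(nbd_count /var/tmp/spdk-nbd.sock)
    [ "$count" -eq 0 ] || { echo "expected no NBD devices, found $count" >&2; exit 1; }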
00:15:38.979 16:53:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.979 16:53:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:39.236 16:53:40 event.app_repeat -- event/event.sh@39 -- # killprocess 61484 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 61484 ']' 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 61484 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61484 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.236 killing process with pid 61484 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61484' 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@967 -- # kill 61484 00:15:39.236 16:53:40 event.app_repeat -- common/autotest_common.sh@972 -- # wait 61484 00:15:40.607 spdk_app_start is called in Round 0. 00:15:40.607 Shutdown signal received, stop current app iteration 00:15:40.607 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:15:40.607 spdk_app_start is called in Round 1. 00:15:40.607 Shutdown signal received, stop current app iteration 00:15:40.607 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:15:40.607 spdk_app_start is called in Round 2. 00:15:40.607 Shutdown signal received, stop current app iteration 00:15:40.607 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:15:40.607 spdk_app_start is called in Round 3. 
00:15:40.607 Shutdown signal received, stop current app iteration 00:15:40.607 16:53:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:40.607 16:53:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:15:40.607 00:15:40.607 real 0m21.364s 00:15:40.607 user 0m44.347s 00:15:40.607 sys 0m3.460s 00:15:40.607 16:53:42 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.607 16:53:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:40.607 ************************************ 00:15:40.607 END TEST app_repeat 00:15:40.607 ************************************ 00:15:40.607 16:53:42 event -- common/autotest_common.sh@1142 -- # return 0 00:15:40.607 16:53:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:40.607 16:53:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:40.607 16:53:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:40.607 16:53:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.607 16:53:42 event -- common/autotest_common.sh@10 -- # set +x 00:15:40.607 ************************************ 00:15:40.607 START TEST cpu_locks 00:15:40.607 ************************************ 00:15:40.607 16:53:42 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:40.865 * Looking for test storage... 00:15:40.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:40.865 16:53:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:40.865 16:53:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:40.865 16:53:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:40.865 16:53:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:40.865 16:53:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:40.865 16:53:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.865 16:53:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:40.865 ************************************ 00:15:40.865 START TEST default_locks 00:15:40.865 ************************************ 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61946 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61946 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61946 ']' 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
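The app is shut down through the killprocess helper visible above: it checks that the pid still resolves to an SPDK reactor (ps --no-headers -o comm=), refuses to touch anything that looks like sudo, then kills the pid and waits on it. The sketch below is a rough approximation, not the autotest_common.sh helper itself; the reactor_* pattern match is a simplification of the checks seen in the trace.

    # Kill a previously started SPDK app by pid, but only if the pid still
    # looks like an SPDK reactor process, then wait for it to exit.
    kill_spdk_app() {
        local pid=$1 comm
        comm=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        case "$comm" in
            reactor_*) ;;                                     # looks like SPDK
            *) echo "pid $pid is '$comm', refusing to kill" >&2; return 1 ;;
        esac
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # wait only applies to our own children
    }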
00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.865 16:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:40.865 [2024-07-22 16:53:42.409898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:40.865 [2024-07-22 16:53:42.410086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61946 ] 00:15:41.124 [2024-07-22 16:53:42.591051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.382 [2024-07-22 16:53:42.864292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.639 [2024-07-22 16:53:43.135899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:42.573 16:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.573 16:53:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:15:42.573 16:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61946 00:15:42.573 16:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61946 00:15:42.573 16:53:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:42.832 16:53:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61946 00:15:42.832 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61946 ']' 00:15:42.832 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61946 00:15:42.832 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61946 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.090 killing process with pid 61946 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61946' 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61946 00:15:43.090 16:53:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61946 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61946 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61946 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.628 16:53:47 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61946 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61946 ']' 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:45.628 ERROR: process (pid: 61946) is no longer running 00:15:45.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61946) - No such process 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:45.628 00:15:45.628 real 0m4.913s 00:15:45.628 user 0m4.798s 00:15:45.628 sys 0m0.793s 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.628 16:53:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:45.628 ************************************ 00:15:45.628 END TEST default_locks 00:15:45.628 ************************************ 00:15:45.628 16:53:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:45.628 16:53:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:45.628 16:53:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:45.628 16:53:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.628 16:53:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:45.628 ************************************ 00:15:45.628 START TEST default_locks_via_rpc 00:15:45.628 ************************************ 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62026 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:45.628 16:53:47 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62026 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62026 ']' 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.628 16:53:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.886 [2024-07-22 16:53:47.377783] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:45.886 [2024-07-22 16:53:47.378181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62026 ] 00:15:46.143 [2024-07-22 16:53:47.560433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.401 [2024-07-22 16:53:47.816611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.660 [2024-07-22 16:53:48.084178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:47.227 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.227 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:47.227 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:47.227 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.227 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62026 00:15:47.484 16:53:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:47.484 16:53:48 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62026 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62026 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62026 ']' 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62026 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62026 00:15:48.050 killing process with pid 62026 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62026' 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62026 00:15:48.050 16:53:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62026 00:15:51.330 00:15:51.330 real 0m5.215s 00:15:51.330 user 0m5.138s 00:15:51.330 sys 0m0.789s 00:15:51.330 16:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.330 ************************************ 00:15:51.330 END TEST default_locks_via_rpc 00:15:51.330 ************************************ 00:15:51.330 16:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.330 16:53:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:51.330 16:53:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:51.330 16:53:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:51.330 16:53:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.330 16:53:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:51.330 ************************************ 00:15:51.330 START TEST non_locking_app_on_locked_coremask 00:15:51.330 ************************************ 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62117 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62117 /var/tmp/spdk.sock 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62117 ']' 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:51.330 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.330 16:53:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:51.330 [2024-07-22 16:53:52.675031] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:51.330 [2024-07-22 16:53:52.675228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62117 ] 00:15:51.330 [2024-07-22 16:53:52.863057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.596 [2024-07-22 16:53:53.206654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.162 [2024-07-22 16:53:53.501929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:52.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62139 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62139 /var/tmp/spdk2.sock 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62139 ']' 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.727 16:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:52.984 [2024-07-22 16:53:54.438548] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:52.984 [2024-07-22 16:53:54.439023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62139 ] 00:15:53.241 [2024-07-22 16:53:54.634640] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
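The cpu_locks cases all reduce to one observable: whether the target process is holding a lock file named spdk_cpu_lock, which the trace inspects with lslocks -p <pid> piped into grep, and which the via_rpc case toggles at runtime with the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs. A hedged wrapper around that check follows; the helper name and the $target_pid variable are placeholders, and the expected before/after states are inferred from the no_locks and locks_exist checks in the trace.

    # Return 0 if the given SPDK pid currently holds a CPU core lock file.
    has_cpu_lock() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Release and re-acquire the locks over RPC (default socket), then re-check.
    rpc.py framework_disable_cpumask_locks
    has_cpu_lock "$target_pid" && echo "still locked (unexpected)" >&2
    rpc.py framework_enable_cpumask_locks
    has_cpu_lock "$target_pid" || echo "lock not re-acquired (unexpected)" >&2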
00:15:53.241 [2024-07-22 16:53:54.634733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.807 [2024-07-22 16:53:55.203051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.373 [2024-07-22 16:53:55.764478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:55.748 16:53:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.748 16:53:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:55.748 16:53:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62117 00:15:55.748 16:53:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:55.748 16:53:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62117 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62117 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62117 ']' 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62117 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62117 00:15:57.148 killing process with pid 62117 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62117' 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62117 00:15:57.148 16:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62117 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62139 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62139 ']' 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62139 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62139 00:16:03.697 killing process with pid 62139 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:03.697 16:54:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62139' 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62139 00:16:03.697 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62139 00:16:06.278 00:16:06.278 real 0m14.764s 00:16:06.278 user 0m15.218s 00:16:06.278 sys 0m1.625s 00:16:06.278 16:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.278 16:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:06.278 ************************************ 00:16:06.278 END TEST non_locking_app_on_locked_coremask 00:16:06.278 ************************************ 00:16:06.278 16:54:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:06.278 16:54:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:06.278 16:54:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:06.278 16:54:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.278 16:54:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:06.278 ************************************ 00:16:06.278 START TEST locking_app_on_unlocked_coremask 00:16:06.278 ************************************ 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:16:06.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62315 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62315 /var/tmp/spdk.sock 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62315 ']' 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.278 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:06.278 [2024-07-22 16:54:07.481610] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
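The locks_exist checks in the run above (lslocks -p 62117 piped through grep -q spdk_cpu_lock) verify that a target still holds a lock on one of the /var/tmp/spdk_cpu_lock_* files backing the core claims. A rough stand-in for that helper, for readers following the trace; the real definition lives in event/cpu_locks.sh and may differ in detail:

    # Return success if the process holds a lock whose path mentions spdk_cpu_lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 62117 && echo "pid 62117 still holds its CPU-core lock"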
00:16:06.278 [2024-07-22 16:54:07.481797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62315 ] 00:16:06.278 [2024-07-22 16:54:07.679308] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:06.278 [2024-07-22 16:54:07.679397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.536 [2024-07-22 16:54:08.054743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.102 [2024-07-22 16:54:08.414113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62342 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62342 /var/tmp/spdk2.sock 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62342 ']' 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.669 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:07.927 [2024-07-22 16:54:09.350959] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:07.927 [2024-07-22 16:54:09.351531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62342 ] 00:16:08.185 [2024-07-22 16:54:09.547687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.798 [2024-07-22 16:54:10.069395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.056 [2024-07-22 16:54:10.615896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:10.956 16:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.956 16:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:10.956 16:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62342 00:16:10.956 16:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62342 00:16:10.956 16:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62315 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62315 ']' 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62315 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62315 00:16:11.894 killing process with pid 62315 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62315' 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62315 00:16:11.894 16:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62315 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62342 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62342 ']' 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62342 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62342 00:16:18.451 killing process with pid 62342 00:16:18.451 16:54:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62342' 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62342 00:16:18.451 16:54:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62342 00:16:20.979 ************************************ 00:16:20.979 END TEST locking_app_on_unlocked_coremask 00:16:20.979 ************************************ 00:16:20.979 00:16:20.979 real 0m14.864s 00:16:20.979 user 0m15.374s 00:16:20.979 sys 0m1.678s 00:16:20.979 16:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.979 16:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:20.979 16:54:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:20.979 16:54:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:20.980 16:54:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:20.980 16:54:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.980 16:54:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:20.980 ************************************ 00:16:20.980 START TEST locking_app_on_locked_coremask 00:16:20.980 ************************************ 00:16:20.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62518 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62518 /var/tmp/spdk.sock 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62518 ']' 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.980 16:54:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:20.980 [2024-07-22 16:54:22.394554] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
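Every teardown in this suite follows the killprocess sequence traced above: probe the PID with kill -0, read its command name with ps (reactor_0 for a single-core target), log the kill, then wait so sockets and lock files are released before the next case starts. Reconstructed here as a sketch only; the real helper in test/common/autotest_common.sh carries extra checks, such as the sudo comparison visible in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                               # fail fast if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. "reactor_0"
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                  # reap it so its locks and sockets are freed
    }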
00:16:20.980 [2024-07-22 16:54:22.394740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62518 ] 00:16:20.980 [2024-07-22 16:54:22.576553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.237 [2024-07-22 16:54:22.848646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.812 [2024-07-22 16:54:23.137320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62539 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62539 /var/tmp/spdk2.sock 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62539 /var/tmp/spdk2.sock 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62539 /var/tmp/spdk2.sock 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62539 ']' 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:22.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.379 16:54:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:22.638 [2024-07-22 16:54:24.094198] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:22.638 [2024-07-22 16:54:24.094807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62539 ] 00:16:22.896 [2024-07-22 16:54:24.293519] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62518 has claimed it. 00:16:22.896 [2024-07-22 16:54:24.293632] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:23.154 ERROR: process (pid: 62539) is no longer running 00:16:23.154 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62539) - No such process 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62518 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62518 00:16:23.154 16:54:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62518 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62518 ']' 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62518 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62518 00:16:23.720 killing process with pid 62518 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62518' 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62518 00:16:23.720 16:54:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62518 00:16:27.093 ************************************ 00:16:27.093 END TEST locking_app_on_locked_coremask 00:16:27.093 ************************************ 00:16:27.093 00:16:27.093 real 0m6.007s 00:16:27.093 user 0m6.327s 00:16:27.093 sys 0m0.957s 00:16:27.093 16:54:28 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:27.093 16:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:27.093 16:54:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:27.093 16:54:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:27.093 16:54:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:27.093 16:54:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.093 16:54:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:27.093 ************************************ 00:16:27.093 START TEST locking_overlapped_coremask 00:16:27.093 ************************************ 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:16:27.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62620 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62620 /var/tmp/spdk.sock 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62620 ']' 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.093 16:54:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:27.093 [2024-07-22 16:54:28.426669] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:27.093 [2024-07-22 16:54:28.427024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62620 ] 00:16:27.093 [2024-07-22 16:54:28.599526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.351 [2024-07-22 16:54:28.880124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.351 [2024-07-22 16:54:28.880186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.351 [2024-07-22 16:54:28.880187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.608 [2024-07-22 16:54:29.164213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62638 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62638 /var/tmp/spdk2.sock 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62638 /var/tmp/spdk2.sock 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.574 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62638 /var/tmp/spdk2.sock 00:16:28.575 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62638 ']' 00:16:28.575 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:28.575 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.575 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:28.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:28.575 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.575 16:54:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:28.575 [2024-07-22 16:54:30.103883] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:28.575 [2024-07-22 16:54:30.104396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62638 ] 00:16:28.832 [2024-07-22 16:54:30.284699] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62620 has claimed it. 00:16:28.832 [2024-07-22 16:54:30.284801] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:29.398 ERROR: process (pid: 62638) is no longer running 00:16:29.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62638) - No such process 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62620 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 62620 ']' 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 62620 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62620 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62620' 00:16:29.398 killing process with pid 62620 00:16:29.398 16:54:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 62620 00:16:29.398 16:54:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 62620 00:16:32.800 00:16:32.800 real 0m5.505s 00:16:32.800 user 0m14.317s 00:16:32.800 sys 0m0.647s 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:32.800 ************************************ 00:16:32.800 END TEST locking_overlapped_coremask 00:16:32.800 ************************************ 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 16:54:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:32.800 16:54:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:32.800 16:54:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:32.800 16:54:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.800 16:54:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 ************************************ 00:16:32.800 START TEST locking_overlapped_coremask_via_rpc 00:16:32.800 ************************************ 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:16:32.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62713 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62713 /var/tmp/spdk.sock 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62713 ']' 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.800 16:54:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 [2024-07-22 16:54:34.024201] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:32.800 [2024-07-22 16:54:34.024382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:16:32.800 [2024-07-22 16:54:34.201303] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
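The locking_overlapped_coremask failure above is the mask-overlap case: the first target runs with -m 0x7 (cores 0-2) while the challenger asks for -m 0x1c (cores 2-4), and the shared core is the one named in "Cannot create lock on core 2". The suite treats the challenger's non-zero exit as the expected result, mirroring the earlier core-0 case. The contested core falls out of plain shell arithmetic, shown here only to make the overlap visible:

    # AND the two core masks to see where they collide.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2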
00:16:32.800 [2024-07-22 16:54:34.201398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.058 [2024-07-22 16:54:34.551128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.058 [2024-07-22 16:54:34.551237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.058 [2024-07-22 16:54:34.551268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.332 [2024-07-22 16:54:34.872489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:34.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62742 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62742 /var/tmp/spdk2.sock 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62742 ']' 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.282 16:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.282 [2024-07-22 16:54:35.862945] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:34.282 [2024-07-22 16:54:35.863126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62742 ] 00:16:34.540 [2024-07-22 16:54:36.056140] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:16:34.540 [2024-07-22 16:54:36.056239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.106 [2024-07-22 16:54:36.615842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.106 [2024-07-22 16:54:36.619464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.106 [2024-07-22 16:54:36.619490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:35.716 [2024-07-22 16:54:37.202026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 [2024-07-22 16:54:38.843535] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62713 has claimed it. 
00:16:37.616 request: 00:16:37.616 { 00:16:37.616 "method": "framework_enable_cpumask_locks", 00:16:37.616 "req_id": 1 00:16:37.616 } 00:16:37.616 Got JSON-RPC error response 00:16:37.616 response: 00:16:37.616 { 00:16:37.616 "code": -32603, 00:16:37.616 "message": "Failed to claim CPU core: 2" 00:16:37.616 } 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62713 /var/tmp/spdk.sock 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62713 ']' 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.616 16:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62742 /var/tmp/spdk2.sock 00:16:37.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62742 ']' 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.616 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
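The JSON-RPC exchange above is the via_rpc variant of the same check: both targets come up with --disable-cpumask-locks, the first is then asked to take its locks over RPC and succeeds, and the identical framework_enable_cpumask_locks call against the second target's socket is the one rejected with -32603 "Failed to claim CPU core: 2". Outside the harness's rpc_cmd wrapper, the equivalent calls would presumably look like this with SPDK's rpc.py client; the client path is an assumption, while the socket paths and method name are taken from the trace:

    # First target (cores 0-2): claims its locks and succeeds.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # Second target (cores 2-4): fails because core 2 is already locked by pid 62713.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks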
00:16:37.617 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.617 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.874 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.874 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:37.875 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:37.875 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:37.875 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:37.875 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:37.875 00:16:37.875 real 0m5.544s 00:16:37.875 user 0m1.556s 00:16:37.875 sys 0m0.251s 00:16:37.875 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.875 ************************************ 00:16:37.875 END TEST locking_overlapped_coremask_via_rpc 00:16:37.875 ************************************ 00:16:37.875 16:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:37.875 16:54:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:16:37.875 16:54:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62713 ]] 00:16:37.875 16:54:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62713 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62713 ']' 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62713 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62713 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:37.875 killing process with pid 62713 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62713' 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62713 00:16:37.875 16:54:39 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62713 00:16:41.156 16:54:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62742 ]] 00:16:41.156 16:54:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62742 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62742 ']' 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62742 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:16:41.156 16:54:42 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62742 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:41.156 killing process with pid 62742 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62742' 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62742 00:16:41.156 16:54:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62742 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:16:44.440 Process with pid 62713 is not found 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62713 ]] 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62713 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62713 ']' 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62713 00:16:44.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62713) - No such process 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62713 is not found' 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62742 ]] 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62742 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62742 ']' 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62742 00:16:44.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62742) - No such process 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62742 is not found' 00:16:44.440 Process with pid 62742 is not found 00:16:44.440 16:54:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:16:44.440 00:16:44.440 real 1m3.407s 00:16:44.440 user 1m45.660s 00:16:44.440 sys 0m7.982s 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.440 ************************************ 00:16:44.440 END TEST cpu_locks 00:16:44.440 ************************************ 00:16:44.440 16:54:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:44.440 16:54:45 event -- common/autotest_common.sh@1142 -- # return 0 00:16:44.440 ************************************ 00:16:44.440 END TEST event 00:16:44.440 ************************************ 00:16:44.440 00:16:44.440 real 1m37.403s 00:16:44.440 user 2m48.695s 00:16:44.440 sys 0m12.642s 00:16:44.440 16:54:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.440 16:54:45 event -- common/autotest_common.sh@10 -- # set +x 00:16:44.440 16:54:45 -- common/autotest_common.sh@1142 -- # return 0 00:16:44.440 16:54:45 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:44.440 16:54:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:44.440 16:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.440 16:54:45 -- common/autotest_common.sh@10 -- # set +x 00:16:44.440 ************************************ 00:16:44.440 START TEST thread 
00:16:44.440 ************************************ 00:16:44.440 16:54:45 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:44.440 * Looking for test storage... 00:16:44.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:44.440 16:54:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:44.440 16:54:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:44.440 16:54:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.440 16:54:45 thread -- common/autotest_common.sh@10 -- # set +x 00:16:44.440 ************************************ 00:16:44.440 START TEST thread_poller_perf 00:16:44.440 ************************************ 00:16:44.440 16:54:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:44.440 [2024-07-22 16:54:45.812740] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:44.440 [2024-07-22 16:54:45.812921] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62946 ] 00:16:44.440 [2024-07-22 16:54:46.003176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.698 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:44.698 [2024-07-22 16:54:46.276020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.597 ====================================== 00:16:46.597 busy:2113633002 (cyc) 00:16:46.597 total_run_count: 316000 00:16:46.597 tsc_hz: 2100000000 (cyc) 00:16:46.597 ====================================== 00:16:46.597 poller_cost: 6688 (cyc), 3184 (nsec) 00:16:46.597 00:16:46.597 real 0m2.044s 00:16:46.597 user 0m1.796s 00:16:46.597 sys 0m0.134s 00:16:46.597 16:54:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.597 ************************************ 00:16:46.597 END TEST thread_poller_perf 00:16:46.597 ************************************ 00:16:46.597 16:54:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:46.597 16:54:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:46.597 16:54:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:46.597 16:54:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:46.597 16:54:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.597 16:54:47 thread -- common/autotest_common.sh@10 -- # set +x 00:16:46.597 ************************************ 00:16:46.597 START TEST thread_poller_perf 00:16:46.597 ************************************ 00:16:46.597 16:54:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:46.597 [2024-07-22 16:54:47.905995] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:46.597 [2024-07-22 16:54:47.906516] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62988 ] 00:16:46.597 [2024-07-22 16:54:48.091408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.855 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:16:46.855 [2024-07-22 16:54:48.389397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.755 ====================================== 00:16:48.755 busy:2103917754 (cyc) 00:16:48.755 total_run_count: 4154000 00:16:48.755 tsc_hz: 2100000000 (cyc) 00:16:48.755 ====================================== 00:16:48.755 poller_cost: 506 (cyc), 240 (nsec) 00:16:48.755 ************************************ 00:16:48.755 END TEST thread_poller_perf 00:16:48.755 ************************************ 00:16:48.755 00:16:48.755 real 0m2.038s 00:16:48.755 user 0m1.796s 00:16:48.755 sys 0m0.130s 00:16:48.755 16:54:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.755 16:54:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:48.755 16:54:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:48.756 16:54:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:48.756 00:16:48.756 real 0m4.280s 00:16:48.756 user 0m3.665s 00:16:48.756 sys 0m0.388s 00:16:48.756 16:54:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.756 16:54:49 thread -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 ************************************ 00:16:48.756 END TEST thread 00:16:48.756 ************************************ 00:16:48.756 16:54:49 -- common/autotest_common.sh@1142 -- # return 0 00:16:48.756 16:54:49 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:48.756 16:54:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:48.756 16:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.756 16:54:49 -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 ************************************ 00:16:48.756 START TEST accel 00:16:48.756 ************************************ 00:16:48.756 16:54:49 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:48.756 * Looking for test storage... 00:16:48.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:48.756 16:54:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:48.756 16:54:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:16:48.756 16:54:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:48.756 16:54:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63070 00:16:48.756 16:54:50 accel -- accel/accel.sh@63 -- # waitforlisten 63070 00:16:48.756 16:54:50 accel -- common/autotest_common.sh@829 -- # '[' -z 63070 ']' 00:16:48.756 16:54:50 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.756 16:54:50 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.756 16:54:50 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
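The two poller_perf summaries above reduce to straightforward arithmetic: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure follows from the 2100000000 Hz TSC, i.e. 2.1 cycles per nanosecond. Re-deriving the logged values, with truncation assumed at each step since the tool prints integers (its exact rounding may differ):

    awk 'BEGIN {
        c1 = int(2113633002 / 316000)    # 1 us period run -> 6688 cyc
        c2 = int(2103917754 / 4154000)   # 0 us period run -> 506 cyc
        printf "run1: %d cyc, %d nsec\n", c1, int(c1 / 2.1)   # 3184 nsec
        printf "run2: %d cyc, %d nsec\n", c2, int(c2 / 2.1)   # 240 nsec
    }'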
00:16:48.756 16:54:50 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.756 16:54:50 accel -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 16:54:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:16:48.756 16:54:50 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:16:48.756 16:54:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:48.756 16:54:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:48.756 16:54:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:48.756 16:54:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:48.756 16:54:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:48.756 16:54:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:48.756 16:54:50 accel -- accel/accel.sh@41 -- # jq -r . 00:16:48.756 [2024-07-22 16:54:50.253515] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:48.756 [2024-07-22 16:54:50.254014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63070 ] 00:16:49.014 [2024-07-22 16:54:50.446995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.272 [2024-07-22 16:54:50.786896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.530 [2024-07-22 16:54:51.083475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@862 -- # return 0 00:16:50.464 16:54:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:50.464 16:54:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:50.464 16:54:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:50.464 16:54:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:50.464 16:54:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:50.464 16:54:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:50.464 16:54:51 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@10 -- # set +x 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 
16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # IFS== 00:16:50.464 16:54:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:50.464 16:54:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:50.464 16:54:51 accel -- accel/accel.sh@75 -- # killprocess 63070 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@948 -- # '[' -z 63070 ']' 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@952 -- # kill -0 63070 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@953 -- # uname 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63070 00:16:50.464 killing process with pid 63070 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63070' 00:16:50.464 16:54:51 accel -- common/autotest_common.sh@967 -- # kill 63070 00:16:50.465 16:54:51 accel -- common/autotest_common.sh@972 -- # wait 63070 00:16:53.762 16:54:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:16:53.762 16:54:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:53.762 16:54:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.762 16:54:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.762 16:54:54 accel -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 16:54:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:16:53.762 16:54:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:16:53.762 16:54:55 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.762 16:54:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 16:54:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:53.762 16:54:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:53.762 16:54:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:53.762 16:54:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.762 16:54:55 accel -- common/autotest_common.sh@10 -- # set +x 00:16:53.762 ************************************ 00:16:53.762 START TEST accel_missing_filename 00:16:53.762 ************************************ 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.762 16:54:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:16:53.762 16:54:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:16:53.762 [2024-07-22 16:54:55.184998] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:53.762 [2024-07-22 16:54:55.185167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63162 ] 00:16:53.762 [2024-07-22 16:54:55.368225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.020 [2024-07-22 16:54:55.633530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.586 [2024-07-22 16:54:55.915193] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:55.167 [2024-07-22 16:54:56.577461] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:16:55.733 A filename is required. 
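Referring back to the opcode-assignment setup at the top of this suite: the trace shows the accel_get_opc_assignments RPC reply being piped through jq to build opc=module pairs for the expected_opcs map. A sketch of that jq filter applied to a made-up reply (only the filter is taken from the trace; the real reply comes over the RPC socket and lists every opcode):

echo '{"copy": "software", "fill": "software", "crc32c": "software"}' \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# fill=software
# crc32c=software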
00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.733 00:16:55.733 real 0m1.956s 00:16:55.733 user 0m1.696s 00:16:55.733 sys 0m0.187s 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.733 16:54:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:16:55.733 ************************************ 00:16:55.733 END TEST accel_missing_filename 00:16:55.733 ************************************ 00:16:55.733 16:54:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:55.733 16:54:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.733 16:54:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:55.733 16:54:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.733 16:54:57 accel -- common/autotest_common.sh@10 -- # set +x 00:16:55.733 ************************************ 00:16:55.733 START TEST accel_compress_verify 00:16:55.733 ************************************ 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.733 16:54:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:55.733 16:54:57 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:16:55.733 16:54:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:16:55.733 [2024-07-22 16:54:57.206062] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:55.733 [2024-07-22 16:54:57.206240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63204 ] 00:16:55.991 [2024-07-22 16:54:57.392634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.249 [2024-07-22 16:54:57.734205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.506 [2024-07-22 16:54:58.011495] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.070 [2024-07-22 16:54:58.628308] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:16:57.636 00:16:57.636 Compression does not support the verify option, aborting. 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.636 00:16:57.636 real 0m1.970s 00:16:57.636 user 0m1.716s 00:16:57.636 sys 0m0.183s 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.636 16:54:59 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:16:57.636 ************************************ 00:16:57.636 END TEST accel_compress_verify 00:16:57.636 ************************************ 00:16:57.636 16:54:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:57.636 16:54:59 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:57.636 16:54:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:57.636 16:54:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.636 16:54:59 accel -- common/autotest_common.sh@10 -- # set +x 00:16:57.636 ************************************ 00:16:57.636 START TEST accel_wrong_workload 00:16:57.636 ************************************ 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.636 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
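The two compress tests above, and the foobar run that follows, all use the same inversion pattern visible in the trace: the wrapped accel_perf call is expected to fail, its exit status is captured as es, values above 128 are reduced (234 -> 106, 161 -> 33), and the test only passes when the final status is non-zero. The real logic lives in the NOT/valid_exec_arg helpers in autotest_common.sh; the helper below is a simplified, hypothetical illustration of that flow, not the actual implementation:

expect_failure() {                          # hypothetical stand-in for NOT
    local es=0
    "$@" || es=$?                           # run the command, keep its exit status
    (( es > 128 )) && es=$(( es - 128 ))    # e.g. 234 -> 106, 161 -> 33
    (( es != 0 ))                           # succeed only if the wrapped command failed
}
expect_failure accel_perf -t 1 -w compress  # expected to fail: no -l input file given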
00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:16:57.636 16:54:59 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:16:57.636 Unsupported workload type: foobar 00:16:57.636 [2024-07-22 16:54:59.221368] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:57.636 accel_perf options: 00:16:57.636 [-h help message] 00:16:57.636 [-q queue depth per core] 00:16:57.636 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:57.636 [-T number of threads per core 00:16:57.636 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:57.636 [-t time in seconds] 00:16:57.636 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:57.636 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:16:57.636 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:57.636 [-l for compress/decompress workloads, name of uncompressed input file 00:16:57.636 [-S for crc32c workload, use this seed value (default 0) 00:16:57.636 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:57.636 [-f for fill workload, use this BYTE value (default 255) 00:16:57.636 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:57.636 [-y verify result if this switch is on] 00:16:57.636 [-a tasks to allocate per core (default: same value as -q)] 00:16:57.637 Can be used to spread operations across a wider range of memory. 
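For contrast with the failing invocations, the options listed above can be combined into valid runs. The two lines below mirror the crc32c and fill tests that appear later in this log; the -c /dev/fd/62 JSON-config argument seen in the traces is omitted, on the assumption that accel_perf also runs with its default configuration:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y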
00:16:57.637 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:16:57.637 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.637 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.637 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.637 00:16:57.637 real 0m0.088s 00:16:57.637 user 0m0.078s 00:16:57.637 sys 0m0.051s 00:16:57.637 ************************************ 00:16:57.637 END TEST accel_wrong_workload 00:16:57.637 ************************************ 00:16:57.637 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.637 16:54:59 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:16:57.895 16:54:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:57.895 16:54:59 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:57.895 16:54:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:57.895 16:54:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.895 16:54:59 accel -- common/autotest_common.sh@10 -- # set +x 00:16:57.895 ************************************ 00:16:57.895 START TEST accel_negative_buffers 00:16:57.895 ************************************ 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.895 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:16:57.895 16:54:59 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:16:57.895 -x option must be non-negative. 
00:16:57.895 [2024-07-22 16:54:59.364781] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:57.895 accel_perf options: 00:16:57.895 [-h help message] 00:16:57.896 [-q queue depth per core] 00:16:57.896 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:57.896 [-T number of threads per core 00:16:57.896 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:57.896 [-t time in seconds] 00:16:57.896 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:57.896 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:16:57.896 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:57.896 [-l for compress/decompress workloads, name of uncompressed input file 00:16:57.896 [-S for crc32c workload, use this seed value (default 0) 00:16:57.896 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:57.896 [-f for fill workload, use this BYTE value (default 255) 00:16:57.896 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:57.896 [-y verify result if this switch is on] 00:16:57.896 [-a tasks to allocate per core (default: same value as -q)] 00:16:57.896 Can be used to spread operations across a wider range of memory. 00:16:57.896 ************************************ 00:16:57.896 END TEST accel_negative_buffers 00:16:57.896 ************************************ 00:16:57.896 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:16:57.896 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.896 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.896 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.896 00:16:57.896 real 0m0.089s 00:16:57.896 user 0m0.080s 00:16:57.896 sys 0m0.053s 00:16:57.896 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.896 16:54:59 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:16:57.896 16:54:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:57.896 16:54:59 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:57.896 16:54:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:57.896 16:54:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.896 16:54:59 accel -- common/autotest_common.sh@10 -- # set +x 00:16:57.896 ************************************ 00:16:57.896 START TEST accel_crc32c 00:16:57.896 ************************************ 00:16:57.896 16:54:59 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:16:57.896 16:54:59 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:57.896 [2024-07-22 16:54:59.497224] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:57.896 [2024-07-22 16:54:59.497391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63282 ] 00:16:58.155 [2024-07-22 16:54:59.652924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.413 [2024-07-22 16:54:59.895649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.672 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:58.673 16:55:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:01.202 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:17:01.203 ************************************ 00:17:01.203 END TEST accel_crc32c 00:17:01.203 ************************************ 00:17:01.203 16:55:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:01.203 00:17:01.203 real 0m2.902s 00:17:01.203 user 0m2.635s 00:17:01.203 sys 0m0.172s 00:17:01.203 16:55:02 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.203 16:55:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:17:01.203 16:55:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:01.203 16:55:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:17:01.203 16:55:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:01.203 16:55:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.203 16:55:02 accel -- common/autotest_common.sh@10 -- # set +x 00:17:01.203 ************************************ 00:17:01.203 START TEST accel_crc32c_C2 00:17:01.203 ************************************ 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:17:01.203 16:55:02 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:17:01.203 16:55:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:17:01.203 [2024-07-22 16:55:02.465323] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:01.203 [2024-07-22 16:55:02.465491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63334 ] 00:17:01.203 [2024-07-22 16:55:02.649825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.461 [2024-07-22 16:55:02.977334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:01.719 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.720 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:01.720 16:55:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:04.244 16:55:05 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:04.244 00:17:04.244 real 0m3.017s 00:17:04.244 user 0m2.724s 00:17:04.244 sys 0m0.198s 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.244 16:55:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:17:04.244 ************************************ 00:17:04.244 END TEST accel_crc32c_C2 00:17:04.244 ************************************ 00:17:04.244 16:55:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:04.244 16:55:05 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:17:04.244 16:55:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:04.244 16:55:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.244 16:55:05 accel -- common/autotest_common.sh@10 -- # set +x 00:17:04.244 ************************************ 00:17:04.244 START TEST accel_copy 00:17:04.244 ************************************ 00:17:04.244 16:55:05 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.244 16:55:05 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:17:04.244 16:55:05 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:17:04.244 [2024-07-22 16:55:05.539729] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:04.244 [2024-07-22 16:55:05.539910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:17:04.244 [2024-07-22 16:55:05.726288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.502 [2024-07-22 16:55:06.073356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 
16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.760 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:04.761 16:55:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.289 16:55:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:17:07.290 16:55:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:07.290 00:17:07.290 real 0m3.065s 00:17:07.290 user 0m2.762s 00:17:07.290 sys 0m0.203s 00:17:07.290 16:55:08 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.290 16:55:08 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:17:07.290 ************************************ 00:17:07.290 END TEST accel_copy 00:17:07.290 ************************************ 00:17:07.290 16:55:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:07.290 16:55:08 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:07.290 16:55:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:07.290 16:55:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.290 16:55:08 accel -- common/autotest_common.sh@10 -- # set +x 00:17:07.290 ************************************ 00:17:07.290 START TEST accel_fill 00:17:07.290 ************************************ 00:17:07.290 16:55:08 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:07.290 16:55:08 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:17:07.290 16:55:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:17:07.290 [2024-07-22 16:55:08.656465] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:07.290 [2024-07-22 16:55:08.656639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63438 ] 00:17:07.290 [2024-07-22 16:55:08.844910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.856 [2024-07-22 16:55:09.181930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.113 16:55:09 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.113 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.114 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.114 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:08.114 16:55:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:08.114 16:55:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:08.114 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:08.114 16:55:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
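
For reference, the accel_fill pass configured above is driven by the single accel_perf command recorded in the trace (-t 1 -w fill -f 128 -q 64 -a 64 -y). A minimal standalone reproduction is sketched below; it assumes the same build-tree layout the CI VM uses (/home/vagrant/spdk_repo/spdk), drops the harness-supplied -c /dev/fd/62 JSON config, and the flag-to-trace mapping in the comments is inferred from the val= lines rather than taken from accel_perf's own help text.

# Sketch: re-run the fill workload outside the run_test/accel_test wrappers.
# Assumes the SPDK tree is built where the CI VM builds it; adjust SPDK_DIR.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Flag mapping inferred from the trace:
#   -t 1          run for one second      (val='1 seconds')
#   -w fill       fill workload           (accel_opc=fill)
#   -f 128        fill byte 0x80          (val=0x80)
#   -q 64 / -a 64 passed through by accel_test (both surface as val=64)
#   -y            verify the result       (val=Yes)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y
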
00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:17:10.639 16:55:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:10.639 00:17:10.639 real 0m3.122s 00:17:10.639 user 0m0.015s 00:17:10.639 sys 0m0.000s 00:17:10.639 16:55:11 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.639 ************************************ 00:17:10.639 END TEST accel_fill 00:17:10.639 ************************************ 00:17:10.639 16:55:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:17:10.639 16:55:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:10.639 16:55:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:17:10.639 16:55:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:10.639 16:55:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.639 16:55:11 accel -- common/autotest_common.sh@10 -- # set +x 00:17:10.639 ************************************ 00:17:10.639 START TEST accel_copy_crc32c 00:17:10.639 ************************************ 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:17:10.639 16:55:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:17:10.639 [2024-07-22 16:55:11.825882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:10.639 [2024-07-22 16:55:11.826045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63490 ] 00:17:10.639 [2024-07-22 16:55:11.998690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.896 [2024-07-22 16:55:12.280138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.154 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:11.155 16:55:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:13.685 00:17:13.685 real 0m3.005s 00:17:13.685 user 0m2.704s 00:17:13.685 sys 0m0.197s 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.685 16:55:14 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:17:13.685 ************************************ 00:17:13.685 END TEST accel_copy_crc32c 00:17:13.685 ************************************ 00:17:13.685 16:55:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:13.685 16:55:14 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:17:13.685 16:55:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:13.685 16:55:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.685 16:55:14 accel -- common/autotest_common.sh@10 -- # set +x 00:17:13.685 ************************************ 00:17:13.685 START TEST accel_copy_crc32c_C2 00:17:13.685 ************************************ 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:17:13.685 16:55:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:17:13.685 [2024-07-22 16:55:14.868129] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:13.685 [2024-07-22 16:55:14.868321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63538 ] 00:17:13.686 [2024-07-22 16:55:15.048546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.950 [2024-07-22 16:55:15.388977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.207 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.207 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:14.208 16:55:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.742 00:17:16.742 real 0m3.072s 00:17:16.742 user 0m0.019s 00:17:16.742 sys 0m0.002s 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
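
The accel_copy_crc32c_C2 pass timed just above differs from the plain accel_copy_crc32c pass only in the extra -C 2 argument and in the buffer sizes the trace records (a 4096-byte and an 8192-byte buffer here, versus two 4096-byte buffers in the plain test). A sketch of that invocation under the same assumptions as the fill sketch earlier (CI build path, harness -c /dev/fd/62 config omitted):

# Sketch: the chained copy_crc32c variant exercised by accel_copy_crc32c_C2.
# -C 2 is the only flag accel_test adds on top of the plain copy_crc32c run;
# its exact meaning is not spelled out in the log, so treat this as inferred.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2
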
00:17:16.742 ************************************ 00:17:16.742 END TEST accel_copy_crc32c_C2 00:17:16.742 ************************************ 00:17:16.742 16:55:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:17:16.742 16:55:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:16.742 16:55:17 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:17:16.742 16:55:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:16.742 16:55:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.742 16:55:17 accel -- common/autotest_common.sh@10 -- # set +x 00:17:16.742 ************************************ 00:17:16.742 START TEST accel_dualcast 00:17:16.742 ************************************ 00:17:16.742 16:55:17 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:16.742 16:55:17 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:16.743 16:55:17 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:17:16.743 16:55:17 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:17:16.743 [2024-07-22 16:55:17.978099] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:16.743 [2024-07-22 16:55:17.978259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63589 ] 00:17:16.743 [2024-07-22 16:55:18.153872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.000 [2024-07-22 16:55:18.439341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:17.259 16:55:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 ************************************ 00:17:19.788 END TEST accel_dualcast 00:17:19.788 ************************************ 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:17:19.788 16:55:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:19.788 00:17:19.788 real 0m3.012s 00:17:19.788 user 0m0.017s 00:17:19.788 sys 0m0.004s 00:17:19.788 16:55:20 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.788 16:55:20 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:17:19.788 16:55:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:19.788 16:55:20 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:17:19.788 16:55:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:19.788 16:55:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.788 16:55:20 accel -- common/autotest_common.sh@10 -- # set +x 00:17:19.788 ************************************ 00:17:19.788 START TEST accel_compare 00:17:19.788 ************************************ 00:17:19.788 16:55:20 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:17:19.788 16:55:20 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:17:19.788 [2024-07-22 16:55:21.043356] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:19.788 [2024-07-22 16:55:21.043618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63641 ] 00:17:19.788 [2024-07-22 16:55:21.231675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.045 [2024-07-22 16:55:21.507956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:20.303 16:55:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:17:22.884 ************************************ 00:17:22.884 END TEST accel_compare 00:17:22.884 ************************************ 00:17:22.884 16:55:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:22.884 00:17:22.884 real 0m3.017s 00:17:22.884 user 0m2.690s 00:17:22.884 sys 0m0.221s 00:17:22.884 16:55:23 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.884 16:55:23 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:17:22.884 16:55:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:22.884 16:55:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:17:22.884 16:55:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:22.884 16:55:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.884 16:55:24 accel -- common/autotest_common.sh@10 -- # set +x 00:17:22.884 ************************************ 00:17:22.884 START TEST accel_xor 00:17:22.884 ************************************ 00:17:22.884 16:55:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:17:22.884 16:55:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:17:22.884 [2024-07-22 16:55:24.104606] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:22.884 [2024-07-22 16:55:24.104854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63693 ] 00:17:22.884 [2024-07-22 16:55:24.288511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.142 [2024-07-22 16:55:24.636635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:17:23.400 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
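
The dualcast and compare passes above, and the xor pass being configured here, all reduce to the same accel_perf pattern with only the -w workload name changing (the xor trace additionally shows val=2, which looks like a two-source XOR and has no explicit flag on the logged command line). Under the same assumptions as the earlier sketches:

# Sketch: the remaining software-path workloads from this section, one second
# each with result verification, mirroring the logged command lines.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
for wl in dualcast compare xor; do
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl" -y
done
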
00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:23.401 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:25.933 16:55:27 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:25.933 16:55:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.933 00:17:25.933 real 0m3.114s 00:17:25.933 user 0m0.021s 00:17:25.933 sys 0m0.005s 00:17:25.933 ************************************ 00:17:25.933 END TEST accel_xor 00:17:25.933 16:55:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.933 16:55:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:17:25.933 ************************************ 00:17:25.933 16:55:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:25.933 16:55:27 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:17:25.933 16:55:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:25.933 16:55:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.934 16:55:27 accel -- common/autotest_common.sh@10 -- # set +x 00:17:25.934 ************************************ 00:17:25.934 START TEST accel_xor 00:17:25.934 ************************************ 00:17:25.934 16:55:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:17:25.934 16:55:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:17:25.934 [2024-07-22 16:55:27.267193] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
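This second accel_xor pass adds -x 3 to the same workload. Judging by the parsed vector count changing from val=2 in the previous run to val=3 below, the flag raises the number of xor source buffers from two to three; that reading is an inference from the trace, not something the log states outright. Reproduced by hand under the same assumptions as the earlier sketch:

  # Hypothetical standalone rerun of the three-source xor job:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3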
00:17:25.934 [2024-07-22 16:55:27.267475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63745 ] 00:17:25.934 [2024-07-22 16:55:27.449013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.210 [2024-07-22 16:55:27.722783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:17:26.468 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:26.469 16:55:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:28.996 16:55:30 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:28.996 16:55:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:28.996 00:17:28.996 real 0m3.069s 00:17:28.996 user 0m0.011s 00:17:28.996 sys 0m0.002s 00:17:28.996 16:55:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.996 16:55:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:17:28.996 ************************************ 00:17:28.996 END TEST accel_xor 00:17:28.996 ************************************ 00:17:28.996 16:55:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:28.996 16:55:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:17:28.996 16:55:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:17:28.996 16:55:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.996 16:55:30 accel -- common/autotest_common.sh@10 -- # set +x 00:17:28.996 ************************************ 00:17:28.996 START TEST accel_dif_verify 00:17:28.996 ************************************ 00:17:28.996 16:55:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:17:28.996 16:55:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:17:28.996 [2024-07-22 16:55:30.374814] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
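accel_dif_verify launches accel_perf with -w dif_verify, exercising DIF (Data Integrity Field) verification in the software module. The configuration parsed back further down shows two 4096-byte buffers plus '512 bytes' and '8 bytes' entries, which most plausibly are the protected block size and the per-block DIF size; that mapping is an assumption. The bare command behind the wrapper, under the same path assumptions as before:

  # Hypothetical direct run of the traced dif_verify job:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify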
00:17:28.996 [2024-07-22 16:55:30.374981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63797 ] 00:17:28.996 [2024-07-22 16:55:30.551021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.561 [2024-07-22 16:55:30.911828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:29.838 16:55:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:32.382 16:55:33 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:17:32.382 16:55:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:32.382 ************************************ 00:17:32.382 END TEST accel_dif_verify 00:17:32.382 ************************************ 00:17:32.382 00:17:32.382 real 0m3.082s 00:17:32.382 user 0m2.753s 00:17:32.382 sys 0m0.224s 00:17:32.382 16:55:33 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.382 16:55:33 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:17:32.382 16:55:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:32.382 16:55:33 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:17:32.382 16:55:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:17:32.382 16:55:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.382 16:55:33 accel -- common/autotest_common.sh@10 -- # set +x 00:17:32.382 ************************************ 00:17:32.382 START TEST accel_dif_generate 00:17:32.382 ************************************ 00:17:32.382 16:55:33 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.382 16:55:33 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:17:32.382 16:55:33 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:17:32.382 [2024-07-22 16:55:33.509685] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:32.382 [2024-07-22 16:55:33.509864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:17:32.382 [2024-07-22 16:55:33.675942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.382 [2024-07-22 16:55:33.957925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:17:32.949 16:55:34 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.949 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:32.950 16:55:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:34.847 16:55:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:34.848 16:55:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:35.117 16:55:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:35.117 16:55:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:17:35.117 16:55:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:35.117 00:17:35.117 real 0m3.008s 
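Both DIF jobs so far report the same geometry in their parsed output: 4096-byte buffers, '512 bytes' and '8 bytes'. Assuming those map to block size and per-block DIF size as suggested above, each test buffer carries eight protected blocks and 64 bytes of DIF metadata per pass; a quick back-of-the-envelope check:

  # Sanity arithmetic for the assumed DIF geometry (not produced by the harness):
  echo $((4096 / 512))       # 8  protected blocks per 4096-byte buffer
  echo $((8 * 4096 / 512))   # 64 bytes of DIF metadata generated/verified per buffer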
00:17:35.117 user 0m0.016s 00:17:35.117 sys 0m0.005s 00:17:35.117 16:55:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.117 ************************************ 00:17:35.117 END TEST accel_dif_generate 00:17:35.117 ************************************ 00:17:35.117 16:55:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:17:35.117 16:55:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:35.117 16:55:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:17:35.117 16:55:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:17:35.117 16:55:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.117 16:55:36 accel -- common/autotest_common.sh@10 -- # set +x 00:17:35.117 ************************************ 00:17:35.117 START TEST accel_dif_generate_copy 00:17:35.117 ************************************ 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:17:35.117 16:55:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:17:35.117 [2024-07-22 16:55:36.595683] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
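accel_dif_generate_copy, as the name suggests, generates DIF while copying data into a destination buffer; notably, its parsed configuration below lists only the two 4096-byte buffer sizes, with no separate 512-byte/8-byte entries like the two previous DIF jobs. The direct invocations for it and for the compress/decompress jobs that close out this section (which additionally feed the bib sample file via -l, as the later trace shows) would look like the following sketch, under the same path assumptions as before:

  # Hypothetical direct runs, mirroring the commands visible in the trace:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib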
00:17:35.117 [2024-07-22 16:55:36.596191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63902 ] 00:17:35.383 [2024-07-22 16:55:36.776532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.640 [2024-07-22 16:55:37.057189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.897 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.897 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.897 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.897 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.897 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.897 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:35.898 16:55:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:38.424 00:17:38.424 real 0m3.010s 00:17:38.424 user 0m2.711s 00:17:38.424 sys 0m0.194s 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:38.424 16:55:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:17:38.424 ************************************ 00:17:38.424 END TEST accel_dif_generate_copy 00:17:38.424 ************************************ 00:17:38.424 16:55:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:38.424 16:55:39 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:17:38.424 16:55:39 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:38.424 16:55:39 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:17:38.424 16:55:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.424 16:55:39 accel -- common/autotest_common.sh@10 -- # set +x 00:17:38.424 ************************************ 00:17:38.424 START TEST accel_comp 00:17:38.424 ************************************ 00:17:38.424 16:55:39 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:17:38.424 16:55:39 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:17:38.424 16:55:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:17:38.424 [2024-07-22 16:55:39.628389] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:38.424 [2024-07-22 16:55:39.628532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63950 ] 00:17:38.424 [2024-07-22 16:55:39.796499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.683 [2024-07-22 16:55:40.080370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.941 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:38.942 16:55:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:41.504 16:55:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:17:41.504 ************************************ 00:17:41.504 END TEST accel_comp 00:17:41.505 ************************************ 00:17:41.505 16:55:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:41.505 00:17:41.505 real 0m3.031s 00:17:41.505 user 0m2.723s 00:17:41.505 sys 0m0.207s 00:17:41.505 16:55:42 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:41.505 16:55:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:17:41.505 16:55:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:41.505 16:55:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:41.505 16:55:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:41.505 16:55:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:41.505 16:55:42 accel -- common/autotest_common.sh@10 -- # set +x 00:17:41.505 ************************************ 00:17:41.505 START TEST accel_decomp 00:17:41.505 ************************************ 00:17:41.505 16:55:42 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:17:41.505 16:55:42 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:17:41.505 [2024-07-22 16:55:42.723771] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:41.505 [2024-07-22 16:55:42.724028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64006 ] 00:17:41.505 [2024-07-22 16:55:42.919584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.763 [2024-07-22 16:55:43.259022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
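(Aside on the accel/accel.sh@19-23 records around this point: the harness is splitting each line of accel_perf's start-up banner on ':' and matching the label in a case statement to capture the workload opcode and the accel module. The real loop lives in test/accel/accel.sh and is not reproduced in this log; the bash fragment below is only a hedged reconstruction of that read/case pattern, with placeholder banner text fed in through printf.)

  # Hedged reconstruction only, not the actual test/accel/accel.sh source.
  # Mirrors the IFS=: / read -r var val / case "$var" records traced above.
  while IFS=: read -r var val; do
      val=${val//[[:space:]]/}           # strip the padding around the value
      case "$var" in
          workload) accel_opc=$val ;;    # e.g. compress, decompress
          module)   accel_module=$val ;; # e.g. software
      esac
  done < <(printf '%s\n' 'workload: decompress' 'module: software')
  echo "parsed accel_opc=$accel_opc accel_module=$accel_module"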
00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:42.022 16:55:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:44.553 16:55:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:44.553 00:17:44.553 real 0m3.123s 00:17:44.553 user 0m0.017s 00:17:44.553 sys 0m0.003s 00:17:44.553 16:55:45 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:44.553 ************************************ 00:17:44.553 END TEST accel_decomp 00:17:44.553 ************************************ 00:17:44.553 16:55:45 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:17:44.553 16:55:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:44.553 16:55:45 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:44.553 16:55:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:44.553 16:55:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.553 16:55:45 accel -- common/autotest_common.sh@10 -- # set +x 00:17:44.553 ************************************ 00:17:44.553 START TEST accel_decomp_full 00:17:44.553 ************************************ 00:17:44.553 16:55:45 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:17:44.553 16:55:45 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:17:44.553 [2024-07-22 16:55:45.886371] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
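(The accel_decomp_full invocation launched just above differs from the plain accel_decomp run only by -o 0, and later in this trace the echoed transfer size changes from '4096 bytes' to '111250 bytes' accordingly. A rough way to repeat that single run by hand is sketched below; the paths follow the /home/vagrant/spdk_repo layout used throughout this log, hugepages are assumed to be set up already, and the harness's -c /dev/fd/62 JSON feed is dropped on the assumption that the default software module is enough.)

  # Stand-alone repetition sketch; adjust SPDK to your own checkout.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" \
      -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" \
      -y \
      -o 0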
00:17:44.553 [2024-07-22 16:55:45.886526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64059 ] 00:17:44.553 [2024-07-22 16:55:46.063422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.823 [2024-07-22 16:55:46.401059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:45.081 16:55:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.610 16:55:48 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:47.610 16:55:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:47.611 16:55:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:47.611 00:17:47.611 real 0m3.120s 00:17:47.611 user 0m2.788s 00:17:47.611 sys 0m0.216s 00:17:47.611 16:55:48 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.611 16:55:48 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:17:47.611 ************************************ 00:17:47.611 END TEST accel_decomp_full 00:17:47.611 ************************************ 00:17:47.611 16:55:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:47.611 16:55:48 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:47.611 16:55:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:47.611 16:55:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.611 16:55:48 accel -- common/autotest_common.sh@10 -- # set +x 00:17:47.611 ************************************ 00:17:47.611 START TEST accel_decomp_mcore 00:17:47.611 ************************************ 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:17:47.611 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:17:47.611 [2024-07-22 16:55:49.052933] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:47.611 [2024-07-22 16:55:49.053083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64111 ] 00:17:47.611 [2024-07-22 16:55:49.227581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.178 [2024-07-22 16:55:49.510212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.178 [2024-07-22 16:55:49.510318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.178 [2024-07-22 16:55:49.510939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.178 [2024-07-22 16:55:49.510943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.178 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.436 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.437 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.437 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:48.437 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:48.437 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:48.437 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:48.437 16:55:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.965 00:17:50.965 real 0m3.017s 00:17:50.965 user 0m0.026s 00:17:50.965 sys 0m0.002s 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.965 ************************************ 00:17:50.965 END TEST accel_decomp_mcore 00:17:50.965 16:55:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:17:50.965 ************************************ 00:17:50.965 16:55:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:50.965 16:55:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:50.965 16:55:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:50.965 16:55:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.965 16:55:52 accel -- common/autotest_common.sh@10 -- # set +x 00:17:50.965 ************************************ 00:17:50.965 START TEST accel_decomp_full_mcore 00:17:50.965 ************************************ 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:50.965 16:55:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:17:50.965 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:17:50.965 [2024-07-22 16:55:52.140722] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:50.965 [2024-07-22 16:55:52.140896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64166 ] 00:17:50.965 [2024-07-22 16:55:52.324674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.223 [2024-07-22 16:55:52.631777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.223 [2024-07-22 16:55:52.631859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.223 [2024-07-22 16:55:52.631954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.223 [2024-07-22 16:55:52.632267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.480 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.480 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.480 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:51.481 16:55:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:51.481 16:55:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:54.010 00:17:54.010 real 0m3.054s 00:17:54.010 user 0m0.012s 00:17:54.010 sys 0m0.006s 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.010 ************************************ 00:17:54.010 END TEST accel_decomp_full_mcore 00:17:54.010 ************************************ 00:17:54.010 16:55:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 16:55:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:54.010 16:55:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:54.010 16:55:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:54.010 16:55:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.010 16:55:55 accel -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 ************************************ 00:17:54.010 START TEST accel_decomp_mthread 00:17:54.010 ************************************ 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:54.010 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:17:54.011 16:55:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:17:54.011 [2024-07-22 16:55:55.244778] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
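(For reference, the two scaling knobs exercised in this stretch of the log: the *_mcore variants pass -m 0xf, which is why their start-up shows 'Total cores available: 4' and four 'Reactor started on core N' notices, while the accel_decomp_mthread run starting here keeps a single core and instead passes -T 2, so the banner field that reads 1 in every other run is echoed as 2. Both command lines below are copied from this log, minus the harness's -c /dev/fd/62 config feed.)

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # decompress spread across a 4-core mask (accel_decomp_mcore)
  "$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf
  # decompress on one core with two worker threads (accel_decomp_mthread)
  "$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y -T 2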
00:17:54.011 [2024-07-22 16:55:55.245185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64222 ] 00:17:54.011 [2024-07-22 16:55:55.452873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.270 [2024-07-22 16:55:55.791838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
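(Every block in this section is driven through the same run_test helper referenced at common/autotest_common.sh in the trace: it prints the starred START TEST / END TEST banners, times the wrapped command so the real/user/sys footers appear, and, per the xtrace_disable / set +x records, manages xtrace around it. The helper below is only an illustrative stand-in for that pattern, not the real implementation.)

  # Illustrative stand-in for the run_test pattern seen throughout this log.
  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                 # yields the real/user/sys footer
      local rc=$?
      echo "END TEST $name"
      return "$rc"
  }
  run_test_sketch demo_sleep sleep 1   # prints the banner pair plus a timing footer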
00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.529 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:54.530 16:55:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.060 00:17:57.060 real 0m3.132s 00:17:57.060 user 0m2.811s 00:17:57.060 sys 0m0.212s 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.060 16:55:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:17:57.060 ************************************ 00:17:57.060 END TEST accel_decomp_mthread 00:17:57.060 ************************************ 00:17:57.060 16:55:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:57.060 16:55:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:57.060 16:55:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:57.060 16:55:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.060 16:55:58 accel -- common/autotest_common.sh@10 -- # set +x 00:17:57.060 ************************************ 00:17:57.060 START 
TEST accel_decomp_full_mthread 00:17:57.060 ************************************ 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:17:57.060 16:55:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:17:57.060 [2024-07-22 16:55:58.413405] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:57.060 [2024-07-22 16:55:58.413547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64274 ] 00:17:57.060 [2024-07-22 16:55:58.581445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.383 [2024-07-22 16:55:58.846675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.641 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:57.642 16:55:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:57.642 16:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.215 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:00.216 00:18:00.216 real 0m2.931s 00:18:00.216 user 0m0.013s 00:18:00.216 sys 0m0.003s 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.216 16:56:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:18:00.216 ************************************ 00:18:00.216 END TEST accel_decomp_full_mthread 00:18:00.216 ************************************ 
00:18:00.216 16:56:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:00.216 16:56:01 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:18:00.216 16:56:01 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:18:00.216 16:56:01 accel -- accel/accel.sh@137 -- # build_accel_config 00:18:00.216 16:56:01 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:00.216 16:56:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.216 16:56:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:00.216 16:56:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:00.216 16:56:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:00.216 16:56:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:00.216 16:56:01 accel -- common/autotest_common.sh@10 -- # set +x 00:18:00.216 16:56:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:00.216 16:56:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:18:00.216 16:56:01 accel -- accel/accel.sh@41 -- # jq -r . 00:18:00.216 ************************************ 00:18:00.216 START TEST accel_dif_functional_tests 00:18:00.216 ************************************ 00:18:00.216 16:56:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:18:00.216 [2024-07-22 16:56:01.477354] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:00.216 [2024-07-22 16:56:01.477535] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64327 ] 00:18:00.216 [2024-07-22 16:56:01.663601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:00.474 [2024-07-22 16:56:01.947027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.474 [2024-07-22 16:56:01.947121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.474 [2024-07-22 16:56:01.947106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.732 [2024-07-22 16:56:02.268407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:00.991 00:18:00.991 00:18:00.991 CUnit - A unit testing framework for C - Version 2.1-3 00:18:00.991 http://cunit.sourceforge.net/ 00:18:00.991 00:18:00.991 00:18:00.991 Suite: accel_dif 00:18:00.991 Test: verify: DIF generated, GUARD check ...passed 00:18:00.991 Test: verify: DIF generated, APPTAG check ...passed 00:18:00.991 Test: verify: DIF generated, REFTAG check ...passed 00:18:00.991 Test: verify: DIF not generated, GUARD check ...passed 00:18:00.991 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 16:56:02.417405] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:18:00.991 [2024-07-22 16:56:02.417522] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:18:00.991 passed 00:18:00.991 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 16:56:02.417657] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:18:00.991 passed 00:18:00.991 Test: verify: APPTAG correct, APPTAG check ...passed 00:18:00.991 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:18:00.991 Test: verify: APPTAG incorrect, no 
APPTAG check ...[2024-07-22 16:56:02.417875] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:18:00.991 passed 00:18:00.991 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:18:00.991 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:18:00.991 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 16:56:02.418275] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:18:00.991 passed 00:18:00.991 Test: verify copy: DIF generated, GUARD check ...passed 00:18:00.991 Test: verify copy: DIF generated, APPTAG check ...passed 00:18:00.991 Test: verify copy: DIF generated, REFTAG check ...passed 00:18:00.991 Test: verify copy: DIF not generated, GUARD check ...passed 00:18:00.991 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 16:56:02.418671] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:18:00.991 [2024-07-22 16:56:02.418792] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:18:00.991 passed 00:18:00.991 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 16:56:02.418905] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:18:00.991 passed 00:18:00.991 Test: generate copy: DIF generated, GUARD check ...passed 00:18:00.991 Test: generate copy: DIF generated, APTTAG check ...passed 00:18:00.991 Test: generate copy: DIF generated, REFTAG check ...passed 00:18:00.991 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:18:00.991 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:18:00.991 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:18:00.991 Test: generate copy: iovecs-len validate ...passed 00:18:00.991 Test: generate copy: buffer alignment validate ...[2024-07-22 16:56:02.419574] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:18:00.991 passed 00:18:00.991 00:18:00.991 Run Summary: Type Total Ran Passed Failed Inactive 00:18:00.991 suites 1 1 n/a 0 0 00:18:00.991 tests 26 26 26 0 0 00:18:00.991 asserts 115 115 115 0 n/a 00:18:00.991 00:18:00.991 Elapsed time = 0.007 seconds 00:18:02.893 00:18:02.893 real 0m2.654s 00:18:02.893 user 0m5.221s 00:18:02.893 sys 0m0.317s 00:18:02.893 16:56:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.893 ************************************ 00:18:02.893 END TEST accel_dif_functional_tests 00:18:02.893 ************************************ 00:18:02.893 16:56:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:18:02.893 16:56:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:02.893 ************************************ 00:18:02.893 END TEST accel 00:18:02.893 ************************************ 00:18:02.893 00:18:02.893 real 1m14.046s 00:18:02.893 user 1m20.664s 00:18:02.893 sys 0m6.382s 00:18:02.893 16:56:04 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.893 16:56:04 accel -- common/autotest_common.sh@10 -- # set +x 00:18:02.893 16:56:04 -- common/autotest_common.sh@1142 -- # return 0 00:18:02.893 16:56:04 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:18:02.893 16:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:02.893 16:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.894 16:56:04 -- common/autotest_common.sh@10 -- # set +x 00:18:02.894 ************************************ 00:18:02.894 START TEST accel_rpc 00:18:02.894 ************************************ 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:18:02.894 * Looking for test storage... 00:18:02.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:18:02.894 16:56:04 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:02.894 16:56:04 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64416 00:18:02.894 16:56:04 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:02.894 16:56:04 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64416 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64416 ']' 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.894 16:56:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.894 [2024-07-22 16:56:04.352908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:02.894 [2024-07-22 16:56:04.353703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64416 ] 00:18:03.152 [2024-07-22 16:56:04.545196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.411 [2024-07-22 16:56:04.826602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.669 16:56:05 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.669 16:56:05 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:18:03.669 16:56:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:18:03.669 16:56:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:18:03.669 16:56:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:18:03.669 16:56:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:18:03.669 16:56:05 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:18:03.669 16:56:05 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:03.669 16:56:05 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.669 16:56:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 ************************************ 00:18:03.669 START TEST accel_assign_opcode 00:18:03.669 ************************************ 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 [2024-07-22 16:56:05.227626] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 [2024-07-22 16:56:05.235653] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.669 16:56:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:03.940 [2024-07-22 16:56:05.531807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:18:04.874 16:56:06 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.874 software 00:18:04.874 00:18:04.874 real 0m1.122s 00:18:04.874 user 0m0.047s 00:18:04.874 sys 0m0.009s 00:18:04.874 ************************************ 00:18:04.874 END TEST accel_assign_opcode 00:18:04.874 ************************************ 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:04.874 16:56:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:18:04.874 16:56:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64416 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64416 ']' 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64416 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64416 00:18:04.874 killing process with pid 64416 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64416' 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@967 -- # kill 64416 00:18:04.874 16:56:06 accel_rpc -- common/autotest_common.sh@972 -- # wait 64416 00:18:08.198 00:18:08.198 real 0m5.262s 00:18:08.198 user 0m5.148s 00:18:08.198 sys 0m0.618s 00:18:08.198 16:56:09 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.198 16:56:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.198 ************************************ 00:18:08.198 END TEST accel_rpc 00:18:08.198 ************************************ 00:18:08.198 16:56:09 -- common/autotest_common.sh@1142 -- # return 0 00:18:08.198 16:56:09 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:08.198 16:56:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:08.198 16:56:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.198 16:56:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.198 ************************************ 00:18:08.198 START TEST app_cmdline 00:18:08.198 ************************************ 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:08.198 * Looking for test storage... 00:18:08.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:08.198 16:56:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:08.198 16:56:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64543 00:18:08.198 16:56:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64543 00:18:08.198 16:56:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64543 ']' 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.198 16:56:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:08.198 [2024-07-22 16:56:09.629954] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:08.198 [2024-07-22 16:56:09.631004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64543 ] 00:18:08.198 [2024-07-22 16:56:09.805291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.457 [2024-07-22 16:56:10.071895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.021 [2024-07-22 16:56:10.350333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:09.631 16:56:11 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.631 16:56:11 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:18:09.631 16:56:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:09.888 { 00:18:09.888 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:18:09.888 "fields": { 00:18:09.888 "major": 24, 00:18:09.888 "minor": 9, 00:18:09.888 "patch": 0, 00:18:09.888 "suffix": "-pre", 00:18:09.888 "commit": "f7b31b2b9" 00:18:09.888 } 00:18:09.888 } 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:09.888 16:56:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:09.888 16:56:11 app_cmdline -- 
common/autotest_common.sh@648 -- # local es=0 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:09.888 16:56:11 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:10.145 request: 00:18:10.145 { 00:18:10.145 "method": "env_dpdk_get_mem_stats", 00:18:10.145 "req_id": 1 00:18:10.145 } 00:18:10.145 Got JSON-RPC error response 00:18:10.145 response: 00:18:10.145 { 00:18:10.145 "code": -32601, 00:18:10.145 "message": "Method not found" 00:18:10.145 } 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.145 16:56:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64543 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64543 ']' 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64543 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64543 00:18:10.145 killing process with pid 64543 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64543' 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@967 -- # kill 64543 00:18:10.145 16:56:11 app_cmdline -- common/autotest_common.sh@972 -- # wait 64543 00:18:13.472 ************************************ 00:18:13.472 END TEST app_cmdline 00:18:13.472 ************************************ 00:18:13.472 00:18:13.472 real 0m5.292s 00:18:13.472 user 0m5.587s 00:18:13.472 sys 0m0.632s 00:18:13.472 16:56:14 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.472 16:56:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:13.472 16:56:14 -- common/autotest_common.sh@1142 -- # return 0 00:18:13.472 16:56:14 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:13.472 16:56:14 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:13.472 16:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.472 16:56:14 -- common/autotest_common.sh@10 -- # set +x 00:18:13.472 ************************************ 00:18:13.472 START TEST version 00:18:13.472 ************************************ 00:18:13.472 16:56:14 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:13.472 * Looking for test storage... 00:18:13.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:13.472 16:56:14 version -- app/version.sh@17 -- # get_header_version major 00:18:13.472 16:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # cut -f2 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.472 16:56:14 version -- app/version.sh@17 -- # major=24 00:18:13.472 16:56:14 version -- app/version.sh@18 -- # get_header_version minor 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # cut -f2 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.472 16:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.472 16:56:14 version -- app/version.sh@18 -- # minor=9 00:18:13.472 16:56:14 version -- app/version.sh@19 -- # get_header_version patch 00:18:13.472 16:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # cut -f2 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.472 16:56:14 version -- app/version.sh@19 -- # patch=0 00:18:13.472 16:56:14 version -- app/version.sh@20 -- # get_header_version suffix 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:18:13.472 16:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:13.472 16:56:14 version -- app/version.sh@14 -- # cut -f2 00:18:13.472 16:56:14 version -- app/version.sh@20 -- # suffix=-pre 00:18:13.472 16:56:14 version -- app/version.sh@22 -- # version=24.9 00:18:13.472 16:56:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:13.472 16:56:14 version -- app/version.sh@28 -- # version=24.9rc0 00:18:13.472 16:56:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:13.472 16:56:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:13.472 16:56:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:18:13.472 16:56:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:18:13.472 00:18:13.472 real 0m0.165s 00:18:13.472 user 0m0.087s 00:18:13.472 sys 0m0.111s 00:18:13.472 ************************************ 00:18:13.472 END TEST version 00:18:13.472 ************************************ 00:18:13.472 16:56:14 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.472 16:56:14 version -- common/autotest_common.sh@10 -- # set +x 00:18:13.473 16:56:14 -- common/autotest_common.sh@1142 -- # return 0 00:18:13.473 16:56:14 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:18:13.473 16:56:14 -- spdk/autotest.sh@198 -- # uname -s 00:18:13.473 16:56:14 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:18:13.473 16:56:14 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:18:13.473 16:56:14 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:18:13.473 16:56:14 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:18:13.473 16:56:14 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:13.473 16:56:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:13.473 16:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.473 16:56:14 -- common/autotest_common.sh@10 -- # set +x 00:18:13.473 ************************************ 00:18:13.473 START TEST spdk_dd 00:18:13.473 ************************************ 00:18:13.473 16:56:14 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:13.473 * Looking for test storage... 00:18:13.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:13.473 16:56:15 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.473 16:56:15 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.473 16:56:15 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.473 16:56:15 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.473 16:56:15 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 16:56:15 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 16:56:15 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 16:56:15 spdk_dd -- paths/export.sh@5 -- # export PATH 00:18:13.473 16:56:15 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 16:56:15 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.041 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.041 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.041 16:56:15 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:18:14.041 16:56:15 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:18:14.041 16:56:15 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:18:14.041 16:56:15 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@230 -- # local class 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@232 -- # local progif 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@233 -- # class=01 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@15 -- # local i 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@24 -- # return 0 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@15 -- # local i 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@24 -- # return 0 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:18:14.042 16:56:15 
spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:18:14.042 16:56:15 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:14.042 16:56:15 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@139 -- # local lib 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == 
liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib 
_ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.042 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_sock.so.10.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- 
dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:18:14.043 * spdk_dd linked to liburing 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:14.043 16:56:15 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@19 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:14.043 16:56:15 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@61 -- 
# CONFIG_ISAL=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:18:14.044 16:56:15 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:18:14.044 16:56:15 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:18:14.044 16:56:15 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:18:14.044 16:56:15 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:18:14.044 16:56:15 spdk_dd -- dd/common.sh@153 -- # return 0 00:18:14.044 16:56:15 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:18:14.044 16:56:15 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:18:14.044 16:56:15 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:14.044 16:56:15 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.044 16:56:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:18:14.044 ************************************ 00:18:14.044 START TEST spdk_dd_basic_rw 00:18:14.044 ************************************ 00:18:14.044 16:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:18:14.302 * Looking for test storage... 
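The dd/common.sh trace above is a linkage probe: it walks every shared object the spdk_dd binary depends on, matches liburing.so.* against each name (liburing.so.2 matches here), prints '* spdk_dd linked to liburing', cross-checks the CONFIG_URING=y flag it sourced from build_config.sh, and finally exports liburing_in_use=1 so dd.sh@15 can refuse to run only when uring testing is requested against a binary built without liburing. A minimal sketch of that logic; the helper that supplies the library list and the variable names are assumptions for illustration, only the per-library match, the build_config.sh source and the export are visible in the trace:

liburing_in_use=0
while read -r _ lib _; do                      # one shared-object name per record
    if [[ $lib == liburing.so.* ]]; then       # liburing.so.2 satisfies this above
        printf '* spdk_dd linked to liburing\n'
        liburing_in_use=1
    fi
done < <(list_spdk_dd_deps)                    # hypothetical helper; the real source of the list is not shown in this trace
source "$rootdir/test/common/build_config.sh"  # brings in CONFIG_URING=y among the other CONFIG_* values
export liburing_in_use                         # dd.sh aborts only if this is 0 while SPDK_TEST_URING=1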
00:18:14.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:18:14.302 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:18:14.562 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:18:14.562 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:14.563 ************************************ 00:18:14.563 START TEST dd_bs_lt_native_bs 00:18:14.563 ************************************ 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:14.563 16:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:14.563 { 00:18:14.563 "subsystems": [ 00:18:14.563 { 00:18:14.563 "subsystem": "bdev", 00:18:14.563 "config": [ 00:18:14.563 { 00:18:14.563 "params": { 00:18:14.563 "trtype": "pcie", 00:18:14.563 "traddr": "0000:00:10.0", 00:18:14.563 "name": "Nvme0" 00:18:14.563 }, 00:18:14.563 "method": "bdev_nvme_attach_controller" 00:18:14.563 }, 00:18:14.563 { 00:18:14.563 "method": "bdev_wait_for_examine" 00:18:14.563 } 00:18:14.563 ] 00:18:14.563 } 00:18:14.563 ] 00:18:14.563 } 00:18:14.563 [2024-07-22 16:56:16.114873] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
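Before that negative test runs, get_native_nvme_bs (dd/common.sh@124-134) derived the namespace's native block size by capturing the spdk_nvme_identify dump and applying two regex matches to it: the first extracts the index of the current LBA format (#04 above), the second extracts that format's data size (4096). dd_bs_lt_native_bs then drives spdk_dd with --bs=2048, deliberately smaller than the native size, wrapped in NOT so the test only passes if spdk_dd rejects the copy, which the error on the lines that follow confirms. A condensed sketch of the detection; helper and variable names follow the trace, the binary path and exact quoting are simplified:

get_native_nvme_bs() {
    local pci=$1 lbaf id
    mapfile -t id < <("$SPDK_BIN_DIR/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")
    local re_current='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}       # -> 04
    local re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re_size ]] && echo "${BASH_REMATCH[1]}"        # -> 4096
}
native_bs=$(get_native_nvme_bs 0000:00:10.0)                       # 4096 for the QEMU namespace identified above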
00:18:14.563 [2024-07-22 16:56:16.115045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64900 ] 00:18:14.821 [2024-07-22 16:56:16.287949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.080 [2024-07-22 16:56:16.658280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.339 [2024-07-22 16:56:16.947618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:15.597 [2024-07-22 16:56:17.171450] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:18:15.597 [2024-07-22 16:56:17.171549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:16.531 [2024-07-22 16:56:17.884857] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.097 00:18:17.097 real 0m2.453s 00:18:17.097 user 0m2.108s 00:18:17.097 sys 0m0.287s 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.097 ************************************ 00:18:17.097 END TEST dd_bs_lt_native_bs 00:18:17.097 ************************************ 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.097 ************************************ 00:18:17.097 START TEST dd_rw 00:18:17.097 ************************************ 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:18:17.097 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:18:17.098 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:17.098 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:17.098 16:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.665 16:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:18:17.665 16:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:17.665 16:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:17.665 16:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:17.665 { 00:18:17.665 "subsystems": [ 00:18:17.665 { 00:18:17.665 "subsystem": "bdev", 00:18:17.665 "config": [ 00:18:17.665 { 00:18:17.665 "params": { 00:18:17.665 "trtype": "pcie", 00:18:17.665 "traddr": "0000:00:10.0", 00:18:17.665 "name": "Nvme0" 00:18:17.665 }, 00:18:17.665 "method": "bdev_nvme_attach_controller" 00:18:17.665 }, 00:18:17.665 { 00:18:17.665 "method": "bdev_wait_for_examine" 00:18:17.665 } 00:18:17.665 ] 00:18:17.665 } 00:18:17.665 ] 00:18:17.665 } 00:18:17.665 [2024-07-22 16:56:19.204212] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
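The dd_rw setup traced above builds the matrix the rest of this test iterates over: block sizes produced by left-shifting the native block size (4096, 8192, 16384) and queue depths of 1 and 64. For each combination it writes a generated payload from dd.dump0 to the Nvme0n1 bdev, reads it back into dd.dump1, and requires the two files to match. A rough skeleton of that loop; the count derivation is an assumption that merely reproduces the counts and sizes seen in the trace (15 x 4096 = 61440 here, 7 x 8192 = 57344 later):

qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))               # 4096, 8192, 16384
done
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))                 # assumed expression; matches count=15 and count=7 in the trace
        size=$((count * bs))
        gen_bytes "$size" > "$test_file0"     # generated payload into dd.dump0
        spdk_dd --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        spdk_dd --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q "$test_file0" "$test_file1"   # the read-back must be byte-identical
        clear_nvme Nvme0n1 '' "$size"         # zeroes the written region (1 MiB from /dev/zero) before the next pass
    done
done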
00:18:17.665 [2024-07-22 16:56:19.204365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64954 ] 00:18:17.923 [2024-07-22 16:56:19.373825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.180 [2024-07-22 16:56:19.715704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.456 [2024-07-22 16:56:20.049216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.615  Copying: 60/60 [kB] (average 29 MBps) 00:18:20.615 00:18:20.615 16:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:18:20.615 16:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:20.615 16:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:20.615 16:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:20.615 { 00:18:20.615 "subsystems": [ 00:18:20.615 { 00:18:20.615 "subsystem": "bdev", 00:18:20.615 "config": [ 00:18:20.615 { 00:18:20.615 "params": { 00:18:20.615 "trtype": "pcie", 00:18:20.615 "traddr": "0000:00:10.0", 00:18:20.615 "name": "Nvme0" 00:18:20.615 }, 00:18:20.615 "method": "bdev_nvme_attach_controller" 00:18:20.615 }, 00:18:20.615 { 00:18:20.615 "method": "bdev_wait_for_examine" 00:18:20.615 } 00:18:20.615 ] 00:18:20.615 } 00:18:20.615 ] 00:18:20.615 } 00:18:20.615 [2024-07-22 16:56:21.915098] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:20.615 [2024-07-22 16:56:21.915272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64991 ] 00:18:20.615 [2024-07-22 16:56:22.083129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.874 [2024-07-22 16:56:22.362292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.132 [2024-07-22 16:56:22.650318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:22.762  Copying: 60/60 [kB] (average 19 MBps) 00:18:22.762 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:22.762 16:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:22.762 { 00:18:22.762 "subsystems": [ 00:18:22.762 { 00:18:22.762 "subsystem": "bdev", 00:18:22.762 "config": [ 00:18:22.762 { 00:18:22.762 "params": { 00:18:22.762 "trtype": "pcie", 00:18:22.762 "traddr": "0000:00:10.0", 00:18:22.762 "name": "Nvme0" 00:18:22.762 }, 00:18:22.762 "method": "bdev_nvme_attach_controller" 00:18:22.762 }, 00:18:22.762 { 00:18:22.762 "method": "bdev_wait_for_examine" 00:18:22.762 } 00:18:22.762 ] 00:18:22.762 } 00:18:22.762 ] 00:18:22.762 } 00:18:22.762 [2024-07-22 16:56:24.255195] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
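Each of these spdk_dd invocations receives its bdev configuration the same way: gen_conf emits the JSON block echoed in the output, built from the method_bdev_nvme_attach_controller_0 array declared earlier, and the caller passes it through process substitution, which is why spdk_dd reports reading --json /dev/fd/62. A simplified stand-in that produces the same document (the real gen_conf in dd/common.sh assembles it generically from the method_* arrays rather than hard-coding it):

gen_conf() {
cat << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}
# used as: spdk_dd ... --json <(gen_conf)
# bdev_wait_for_examine keeps spdk_dd from issuing I/O before Nvme0n1 has been created and examined.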
00:18:22.762 [2024-07-22 16:56:24.255426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65025 ] 00:18:23.020 [2024-07-22 16:56:24.431970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.278 [2024-07-22 16:56:24.709266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.535 [2024-07-22 16:56:24.993374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:25.166  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:25.166 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:25.166 16:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:26.549 16:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:18:26.549 16:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:26.549 16:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:26.549 16:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:26.549 { 00:18:26.549 "subsystems": [ 00:18:26.549 { 00:18:26.549 "subsystem": "bdev", 00:18:26.549 "config": [ 00:18:26.549 { 00:18:26.549 "params": { 00:18:26.549 "trtype": "pcie", 00:18:26.549 "traddr": "0000:00:10.0", 00:18:26.549 "name": "Nvme0" 00:18:26.549 }, 00:18:26.549 "method": "bdev_nvme_attach_controller" 00:18:26.549 }, 00:18:26.549 { 00:18:26.549 "method": "bdev_wait_for_examine" 00:18:26.550 } 00:18:26.550 ] 00:18:26.550 } 00:18:26.550 ] 00:18:26.550 } 00:18:26.550 [2024-07-22 16:56:27.893489] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:26.550 [2024-07-22 16:56:27.893642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65077 ] 00:18:26.550 [2024-07-22 16:56:28.070106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.808 [2024-07-22 16:56:28.337406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.065 [2024-07-22 16:56:28.620519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:28.694  Copying: 60/60 [kB] (average 58 MBps) 00:18:28.694 00:18:28.694 16:56:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:18:28.694 16:56:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:28.694 16:56:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:28.694 16:56:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:28.694 { 00:18:28.694 "subsystems": [ 00:18:28.694 { 00:18:28.694 "subsystem": "bdev", 00:18:28.694 "config": [ 00:18:28.694 { 00:18:28.694 "params": { 00:18:28.694 "trtype": "pcie", 00:18:28.694 "traddr": "0000:00:10.0", 00:18:28.694 "name": "Nvme0" 00:18:28.694 }, 00:18:28.694 "method": "bdev_nvme_attach_controller" 00:18:28.694 }, 00:18:28.694 { 00:18:28.694 "method": "bdev_wait_for_examine" 00:18:28.694 } 00:18:28.694 ] 00:18:28.694 } 00:18:28.694 ] 00:18:28.694 } 00:18:28.694 [2024-07-22 16:56:30.246661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:28.694 [2024-07-22 16:56:30.246883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65108 ] 00:18:28.952 [2024-07-22 16:56:30.437574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.210 [2024-07-22 16:56:30.738847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.467 [2024-07-22 16:56:31.032741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:31.625  Copying: 60/60 [kB] (average 58 MBps) 00:18:31.625 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:31.625 16:56:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:31.625 { 00:18:31.625 "subsystems": [ 00:18:31.625 { 00:18:31.625 "subsystem": "bdev", 00:18:31.625 "config": [ 00:18:31.625 { 00:18:31.625 "params": { 00:18:31.625 "trtype": "pcie", 00:18:31.625 "traddr": "0000:00:10.0", 00:18:31.625 "name": "Nvme0" 00:18:31.625 }, 00:18:31.625 "method": "bdev_nvme_attach_controller" 00:18:31.625 }, 00:18:31.625 { 00:18:31.625 "method": "bdev_wait_for_examine" 00:18:31.625 } 00:18:31.625 ] 00:18:31.625 } 00:18:31.625 ] 00:18:31.625 } 00:18:31.625 [2024-07-22 16:56:33.016833] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:31.625 [2024-07-22 16:56:33.017037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65147 ] 00:18:31.625 [2024-07-22 16:56:33.192295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.191 [2024-07-22 16:56:33.513300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.191 [2024-07-22 16:56:33.803034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:33.821  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:33.821 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:33.821 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:34.386 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:18:34.386 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:34.386 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:34.386 16:56:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:34.386 { 00:18:34.386 "subsystems": [ 00:18:34.386 { 00:18:34.386 "subsystem": "bdev", 00:18:34.386 "config": [ 00:18:34.386 { 00:18:34.386 "params": { 00:18:34.386 "trtype": "pcie", 00:18:34.386 "traddr": "0000:00:10.0", 00:18:34.386 "name": "Nvme0" 00:18:34.386 }, 00:18:34.386 "method": "bdev_nvme_attach_controller" 00:18:34.386 }, 00:18:34.386 { 00:18:34.386 "method": "bdev_wait_for_examine" 00:18:34.386 } 00:18:34.386 ] 00:18:34.386 } 00:18:34.386 ] 00:18:34.386 } 00:18:34.644 [2024-07-22 16:56:36.076529] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:34.644 [2024-07-22 16:56:36.076702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65189 ] 00:18:34.901 [2024-07-22 16:56:36.286714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.159 [2024-07-22 16:56:36.565779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.417 [2024-07-22 16:56:36.851289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:37.053  Copying: 56/56 [kB] (average 54 MBps) 00:18:37.053 00:18:37.053 16:56:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:37.053 16:56:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:18:37.053 16:56:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:37.053 16:56:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:37.053 { 00:18:37.053 "subsystems": [ 00:18:37.053 { 00:18:37.053 "subsystem": "bdev", 00:18:37.053 "config": [ 00:18:37.053 { 00:18:37.053 "params": { 00:18:37.053 "trtype": "pcie", 00:18:37.053 "traddr": "0000:00:10.0", 00:18:37.053 "name": "Nvme0" 00:18:37.053 }, 00:18:37.053 "method": "bdev_nvme_attach_controller" 00:18:37.053 }, 00:18:37.053 { 00:18:37.053 "method": "bdev_wait_for_examine" 00:18:37.053 } 00:18:37.053 ] 00:18:37.053 } 00:18:37.053 ] 00:18:37.053 } 00:18:37.310 [2024-07-22 16:56:38.734141] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:37.310 [2024-07-22 16:56:38.734315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65225 ] 00:18:37.310 [2024-07-22 16:56:38.901156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.566 [2024-07-22 16:56:39.170376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.132 [2024-07-22 16:56:39.458496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:39.505  Copying: 56/56 [kB] (average 54 MBps) 00:18:39.505 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:39.505 16:56:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:39.505 { 00:18:39.505 "subsystems": [ 00:18:39.505 { 00:18:39.505 "subsystem": "bdev", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "params": { 00:18:39.505 "trtype": "pcie", 00:18:39.505 "traddr": "0000:00:10.0", 00:18:39.505 "name": "Nvme0" 00:18:39.505 }, 00:18:39.505 "method": "bdev_nvme_attach_controller" 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "bdev_wait_for_examine" 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 } 00:18:39.505 [2024-07-22 16:56:41.062887] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:39.505 [2024-07-22 16:56:41.063065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65264 ] 00:18:39.764 [2024-07-22 16:56:41.248858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.022 [2024-07-22 16:56:41.535161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.278 [2024-07-22 16:56:41.857087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:42.424  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:42.424 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:42.424 16:56:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:42.682 16:56:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:18:42.682 16:56:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:42.682 16:56:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:42.682 16:56:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:42.682 { 00:18:42.682 "subsystems": [ 00:18:42.682 { 00:18:42.682 "subsystem": "bdev", 00:18:42.682 "config": [ 00:18:42.682 { 00:18:42.682 "params": { 00:18:42.682 "trtype": "pcie", 00:18:42.682 "traddr": "0000:00:10.0", 00:18:42.682 "name": "Nvme0" 00:18:42.682 }, 00:18:42.682 "method": "bdev_nvme_attach_controller" 00:18:42.682 }, 00:18:42.682 { 00:18:42.682 "method": "bdev_wait_for_examine" 00:18:42.682 } 00:18:42.682 ] 00:18:42.682 } 00:18:42.682 ] 00:18:42.682 } 00:18:42.938 [2024-07-22 16:56:44.323343] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:42.938 [2024-07-22 16:56:44.323487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65306 ] 00:18:42.938 [2024-07-22 16:56:44.499186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.502 [2024-07-22 16:56:44.838873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.759 [2024-07-22 16:56:45.143525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:45.130  Copying: 56/56 [kB] (average 54 MBps) 00:18:45.130 00:18:45.130 16:56:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:18:45.130 16:56:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:45.130 16:56:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:45.130 16:56:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:45.130 { 00:18:45.130 "subsystems": [ 00:18:45.130 { 00:18:45.130 "subsystem": "bdev", 00:18:45.130 "config": [ 00:18:45.130 { 00:18:45.130 "params": { 00:18:45.130 "trtype": "pcie", 00:18:45.130 "traddr": "0000:00:10.0", 00:18:45.130 "name": "Nvme0" 00:18:45.130 }, 00:18:45.130 "method": "bdev_nvme_attach_controller" 00:18:45.130 }, 00:18:45.130 { 00:18:45.130 "method": "bdev_wait_for_examine" 00:18:45.130 } 00:18:45.130 ] 00:18:45.130 } 00:18:45.130 ] 00:18:45.130 } 00:18:45.130 [2024-07-22 16:56:46.733671] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:45.130 [2024-07-22 16:56:46.733843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65337 ] 00:18:45.389 [2024-07-22 16:56:46.914553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.648 [2024-07-22 16:56:47.229343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.214 [2024-07-22 16:56:47.556770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:48.114  Copying: 56/56 [kB] (average 54 MBps) 00:18:48.114 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:48.114 16:56:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:48.114 { 00:18:48.114 "subsystems": [ 00:18:48.114 { 00:18:48.114 "subsystem": "bdev", 00:18:48.114 "config": [ 00:18:48.114 { 00:18:48.114 "params": { 00:18:48.114 "trtype": "pcie", 00:18:48.114 "traddr": "0000:00:10.0", 00:18:48.114 "name": "Nvme0" 00:18:48.114 }, 00:18:48.114 "method": "bdev_nvme_attach_controller" 00:18:48.114 }, 00:18:48.114 { 00:18:48.114 "method": "bdev_wait_for_examine" 00:18:48.114 } 00:18:48.114 ] 00:18:48.114 } 00:18:48.114 ] 00:18:48.114 } 00:18:48.114 [2024-07-22 16:56:49.430085] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:48.114 [2024-07-22 16:56:49.430297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65376 ] 00:18:48.114 [2024-07-22 16:56:49.619154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.373 [2024-07-22 16:56:49.891841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.630 [2024-07-22 16:56:50.172082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:50.262  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:50.262 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:50.262 16:56:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:50.520 16:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:18:50.520 16:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:50.520 16:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:50.520 16:56:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:50.778 { 00:18:50.778 "subsystems": [ 00:18:50.778 { 00:18:50.778 "subsystem": "bdev", 00:18:50.778 "config": [ 00:18:50.778 { 00:18:50.778 "params": { 00:18:50.778 "trtype": "pcie", 00:18:50.778 "traddr": "0000:00:10.0", 00:18:50.778 "name": "Nvme0" 00:18:50.778 }, 00:18:50.778 "method": "bdev_nvme_attach_controller" 00:18:50.778 }, 00:18:50.778 { 00:18:50.778 "method": "bdev_wait_for_examine" 00:18:50.778 } 00:18:50.778 ] 00:18:50.778 } 00:18:50.778 ] 00:18:50.778 } 00:18:50.778 [2024-07-22 16:56:52.243004] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:50.778 [2024-07-22 16:56:52.243148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65418 ] 00:18:51.035 [2024-07-22 16:56:52.411889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.293 [2024-07-22 16:56:52.678782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.551 [2024-07-22 16:56:52.964912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:53.271  Copying: 48/48 [kB] (average 46 MBps) 00:18:53.271 00:18:53.271 16:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:18:53.271 16:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:18:53.271 16:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:53.271 16:56:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:53.271 { 00:18:53.271 "subsystems": [ 00:18:53.271 { 00:18:53.271 "subsystem": "bdev", 00:18:53.271 "config": [ 00:18:53.271 { 00:18:53.271 "params": { 00:18:53.271 "trtype": "pcie", 00:18:53.271 "traddr": "0000:00:10.0", 00:18:53.271 "name": "Nvme0" 00:18:53.271 }, 00:18:53.271 "method": "bdev_nvme_attach_controller" 00:18:53.271 }, 00:18:53.271 { 00:18:53.271 "method": "bdev_wait_for_examine" 00:18:53.271 } 00:18:53.271 ] 00:18:53.271 } 00:18:53.271 ] 00:18:53.271 } 00:18:53.271 [2024-07-22 16:56:54.794292] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:53.271 [2024-07-22 16:56:54.794433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65449 ] 00:18:53.530 [2024-07-22 16:56:54.963749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.788 [2024-07-22 16:56:55.215873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.047 [2024-07-22 16:56:55.476616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:55.678  Copying: 48/48 [kB] (average 46 MBps) 00:18:55.678 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:55.678 16:56:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:55.678 { 00:18:55.678 "subsystems": [ 00:18:55.678 { 00:18:55.678 "subsystem": "bdev", 00:18:55.678 "config": [ 00:18:55.678 { 00:18:55.678 "params": { 00:18:55.678 "trtype": "pcie", 00:18:55.678 "traddr": "0000:00:10.0", 00:18:55.678 "name": "Nvme0" 00:18:55.678 }, 00:18:55.678 "method": "bdev_nvme_attach_controller" 00:18:55.678 }, 00:18:55.678 { 00:18:55.678 "method": "bdev_wait_for_examine" 00:18:55.678 } 00:18:55.678 ] 00:18:55.678 } 00:18:55.678 ] 00:18:55.678 } 00:18:55.678 [2024-07-22 16:56:57.057873] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:55.678 [2024-07-22 16:56:57.058056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65482 ] 00:18:55.678 [2024-07-22 16:56:57.246022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.244 [2024-07-22 16:56:57.567422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.244 [2024-07-22 16:56:57.845136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:58.462  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:58.462 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:18:58.462 16:56:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:58.720 16:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:18:58.720 16:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:18:58.720 16:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:18:58.720 16:57:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:18:58.720 { 00:18:58.720 "subsystems": [ 00:18:58.720 { 00:18:58.720 "subsystem": "bdev", 00:18:58.720 "config": [ 00:18:58.720 { 00:18:58.720 "params": { 00:18:58.720 "trtype": "pcie", 00:18:58.720 "traddr": "0000:00:10.0", 00:18:58.720 "name": "Nvme0" 00:18:58.720 }, 00:18:58.720 "method": "bdev_nvme_attach_controller" 00:18:58.720 }, 00:18:58.720 { 00:18:58.720 "method": "bdev_wait_for_examine" 00:18:58.720 } 00:18:58.720 ] 00:18:58.720 } 00:18:58.720 ] 00:18:58.720 } 00:18:58.720 [2024-07-22 16:57:00.263796] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:58.720 [2024-07-22 16:57:00.263956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65524 ] 00:18:58.978 [2024-07-22 16:57:00.434801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.236 [2024-07-22 16:57:00.726119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.494 [2024-07-22 16:57:01.020471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:01.129  Copying: 48/48 [kB] (average 46 MBps) 00:19:01.129 00:19:01.129 16:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:19:01.129 16:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:19:01.129 16:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:19:01.129 16:57:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:19:01.129 { 00:19:01.129 "subsystems": [ 00:19:01.129 { 00:19:01.129 "subsystem": "bdev", 00:19:01.129 "config": [ 00:19:01.129 { 00:19:01.129 "params": { 00:19:01.129 "trtype": "pcie", 00:19:01.129 "traddr": "0000:00:10.0", 00:19:01.129 "name": "Nvme0" 00:19:01.129 }, 00:19:01.129 "method": "bdev_nvme_attach_controller" 00:19:01.129 }, 00:19:01.129 { 00:19:01.129 "method": "bdev_wait_for_examine" 00:19:01.129 } 00:19:01.129 ] 00:19:01.129 } 00:19:01.129 ] 00:19:01.129 } 00:19:01.129 [2024-07-22 16:57:02.581479] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:01.129 [2024-07-22 16:57:02.581622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65561 ] 00:19:01.417 [2024-07-22 16:57:02.753384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.417 [2024-07-22 16:57:03.020516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.983 [2024-07-22 16:57:03.301010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:03.881  Copying: 48/48 [kB] (average 46 MBps) 00:19:03.881 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:19:03.881 16:57:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:19:03.881 { 00:19:03.881 "subsystems": [ 00:19:03.881 { 00:19:03.881 "subsystem": "bdev", 00:19:03.881 "config": [ 00:19:03.881 { 00:19:03.881 "params": { 00:19:03.881 "trtype": "pcie", 00:19:03.881 "traddr": "0000:00:10.0", 00:19:03.881 "name": "Nvme0" 00:19:03.881 }, 00:19:03.881 "method": "bdev_nvme_attach_controller" 00:19:03.881 }, 00:19:03.881 { 00:19:03.881 "method": "bdev_wait_for_examine" 00:19:03.881 } 00:19:03.881 ] 00:19:03.881 } 00:19:03.881 ] 00:19:03.881 } 00:19:03.881 [2024-07-22 16:57:05.153616] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:03.881 [2024-07-22 16:57:05.153769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65599 ] 00:19:03.881 [2024-07-22 16:57:05.324623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.139 [2024-07-22 16:57:05.589474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.397 [2024-07-22 16:57:05.865893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:06.029  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:06.029 00:19:06.029 00:19:06.029 real 0m48.842s 00:19:06.029 user 0m42.067s 00:19:06.029 sys 0m20.687s 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.029 ************************************ 00:19:06.029 END TEST dd_rw 00:19:06.029 ************************************ 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:19:06.029 ************************************ 00:19:06.029 START TEST dd_rw_offset 00:19:06.029 ************************************ 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ms3vef3bd7vrbmfw2a4rjrx22jjbjo5urs93o64s342r77mmm49mve0bwaaayeus7foho5tdi1wbryzks6gln9j7tif03f95z8li3kt8zzj8ysqdtoxrt0jod6tlcf6fdar46klogcjz51iv122hwwxvuku85choq6kp5vkk7kdl1jotzp23wasizndt5aa02z8m5q17go3qlcvjda8qs8t9prnss6efu28hu6lqjh1yoz1x4zcvkvzbhs2hv3kczr3dxbp7en2wafm6pfy6zsriep1w015sg7tb6rj6zsbb278ayxn2fpz7puhcmwfwubfbjd9yiqhd9yqwg0sfzbwsugb8x0t6jofexss6gbib6uqw4216ptdxogkkm41zdxz0ockj5p225pn94irnsu5s37re580l3sw0fpw9qk4mpvq3c5m0oymm5jyeq2y500z52i3ay9mh8ry12dxcbva0y4rf3i4x0t8w26bhhtzxb5p62vtt8cxzp0vvdk20izld983btwxnl7i1l8aztbu1fp1ykmlykmf2wdyh122bqy3br407qopkwvhp37th2b7qoks1b5g79sfqngldcn4fim9zqakjbqeenpg2a6f46tfwr7ufe35kjc5mzeuba07sbhw0258nz6av9u8kmwr55b146tk1yseutatk3j4egf256bvdfp8i6scl2op8gudews1fbhqofc9pg55avrfj2bqrrbq8vr4onjczkhfbe3ux601vona1k3vnp5pxplaoi2u6d3g6xri983i5f2qa67q728kme4bf6irl7zcddc7zizg3aalxkm54u37zn2poz3kzlolisge4bmlyqem539u39mcpkww7if5l0pup8w4l0dx2ig47jil72un4xsqlujrknx9io7xr1zvfaat4z2vyqvdilywdh5af06s15uqo9nzvojmv669mzcni61vf74ymq1uigq6tomm1gemh08bib7kdydfg0li5d7ay39dhqdopm53cfmlyxzryuwo34wipq13teyspxvm037b1ku3dwqyrjnrin3zm38kv0tb40rofz4gsn7cc6zak3gs2y6uipijwzyiv6sv1cknbksp5tm56y2fyashxyy92iaoivi3bt2pe5w2ttioxq1u1pkkc1zhfiwdclned4o521lwpw03s47yg3m79o1exvdb33aj5k0a8rbw9pbq8li7gcamuy672yhi94rwrn84hfysuami2bakyv4gdvfdv53iegp4diai5599ycxu9wxa9dlinhvnlo9rkxmfs6qe31tvby2tv2a12gr09drrpsctxvsdjjeddevzc6wntlwijwlc6duwb5f0fkeh53lgxr9blucm0l03p1mtn7dcqzszu6drz69eexeb3netrppv5z549la9patldhfayiysuk2734nssekp4v0fpl3ssao7g3p3sapavbxg6gblzto9qrcesfhbau68qfmo58r82xpn5v2t1uj82qrf6xxw5xm3togtu69836f32m4naprpay0pgx3rj2z9zdd3f9kuuizymm1xdkgo4e5onyi73vn03kn0812hcikmvtyuee7p5dfd0yzlcxuv6d53aiaghrszhkp0zykf4l28b1ka31aj58f003sbilbflpbybe04nv977jtl5huk4quxgns7kjjycrls0ql7zw2ifn2zqp40q4i9jh3j9e4d57ukv3ps175pfi8zn689u6tswoc4k58mo6iwspoixscusyymsk50wg5c7ph9wdm43ujxavt35kia027fewvesujv22u690ewng5ad951vqjyc0odyfrdcqki1x1g8cmnk08jdkmtxn017b4qjg1zko4panjtwvbaai0jh7wwsjynys9zftsaptl1ojgc3ow7zn1fztq80lm4wyphhiv7krf78vwv0gglawm8npdg50n4pqrlcxqzecakron1v5zesa1x7yj3ixuvrtc3f0drsnfpcuskd2umh3m0trqdnj5npk95fdjpt9eavmkajrpuwpwzk9839dwoz8dh5dihb1joinvq186o689waivlb6ljbj35xhtsvau48hg59ppdudlxuakypwhwrpd0hz8dysr5zox6x6illo2igrektm0vcqm7gq3m39uerm556tya40lg5icycfqqjs5534r3k5zw8aeo7ytuolx2kjib04jpvyw0pgs4hrjjt6wds1fnv3vwkboc0moc1iptst0me5ledtpfayj8kb52o3s4uufq579oc213xhawbn1nd66yjcso2wsyfuzyvfak7er5ljkc3ouhpt52vogavv69pd3n61yeif6fkfxnr4c9mg7r8j6y02js5h2rwqoeraz7lf1i13iqca1llo0uhmu3r9zw1w61qcd71ipc50w7ohqu4k4imqziynpr7behfq3o8bwocxfu0c4n7givhqzmxiw175c7xgq5n3nhx2h6sj79uy9xmot8x0ph5e5qa2mzlrqp8du2ec7m6y4cfuv27a9izpus7njm24bpgfbiklm3p8h78o8yh2wux2n7oq6pf0em3lj44s3blvj2pjy35rbyvnu0v39dexpcolj3ic1jyjlcen2pnds3fibh9p1d00h1og65llc0zxxczc8oc51yggrc3visvik3bavhbm2mugpzko3id0dkhkzgkc6sd8rp2fvo338p5lhs6o14qg646le8w69934psjybr29vw66oebr8b69fklkqpc4uykf57rf5uynglkng7jel8x6q0o6ho4jrqcgv4kn22oldbzepb3pkmo8oxplpsi3kpmeb103u68msf0pc0l2e31d9l4cdtavoytgjxi14s7bmymxmkegnxjvvp8ial4mz6gdgq36wsewbulpiu3o4k7bqj7075gtgcno2q8ldebxie5tfaeq4r84ho02fgtua0a100gbit02o0v7mxfauann0b7mluphti01fbnyvfoqmgo342dz38qw3nxkfv9q3b1jco8qjxem3277y29k1yv8yoy4fcnyn9ne5tw2sjhutml6xz7griv43skbdzsg60fnwhb4tgjnvtefi7jnkrgoore66ggciup84waipm4n3yawbgemo5vr7cy4yzmm1nkjt4jirbaf8xkd48gubg0vfxmsbq8nndse8bc92lohka3ja8py935ydc0vih9ohnj0cm1c6qhrzzl9l0i1zhhl343bzb87sb8zm3slyk3ovi0ele0iaz2z247fujm78w34c96vr9qlypi4ofwbz1b2uptpvgwhie905h7bjumobcqhjyq8p8gi6j0m47ibfpslwhxpa6sjsrc2b44n61dqaa88pic8x5kyiallpa8xsk9bgb2nadk40jx250gvh7ebijv9hke51bxum6yy5h8iczqgfna549ltp6c5euuk9xwnhdgrss2wiz54uzxunutonnz8c4nijsl4qyqig4iqwto6d4459hb12b6kvu1msv9dukelcj8mg66yz920luldtk6ltmd6xcog8
cpr64fafa39tbcjg6wcg188f5xhqwxvgofd22uiaimqjh0e31hsv6r9xsjnirbua0hiz6jswzeqmp29efpm78iev6e4ojcde0vhp5mow64hvv3krx133h78j36rknr46h26sg8b5omu6gpjml4demgj3mvj7h68msqtkoe5n77rlet9bbky0mkrg2qolhh528kdcuk4p51374t71jj7wghv6nrxmai5hs9lactzl0cxmwm8pakgt210t4rcb5nlm5uchdx5uec6aaaujahss71hb38odsup0bb4pr8ehdk06c752vz0zfofofbq8yn5g79q0p2zp8wudcn2nutb7j2j1dpokbn6wwyruinkt7ne1rgoqjmsvzmfza78immrszpubezajpypst5ng8fqptfm8h0mgrmuw7s762v8wy8kgyhmzdum9qsvrxe3rj8oom6byldvrfhpywbf9ciixulbtvl9g275i32focvi1npai3acpoqdrc3cr7usvqplrmhjpctrcndvmhy53oz0tj9fw8ti2u1kmu1 00:19:06.029 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:19:06.030 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:19:06.030 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:19:06.030 16:57:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:19:06.030 { 00:19:06.030 "subsystems": [ 00:19:06.030 { 00:19:06.030 "subsystem": "bdev", 00:19:06.030 "config": [ 00:19:06.030 { 00:19:06.030 "params": { 00:19:06.030 "trtype": "pcie", 00:19:06.030 "traddr": "0000:00:10.0", 00:19:06.030 "name": "Nvme0" 00:19:06.030 }, 00:19:06.030 "method": "bdev_nvme_attach_controller" 00:19:06.030 }, 00:19:06.030 { 00:19:06.030 "method": "bdev_wait_for_examine" 00:19:06.030 } 00:19:06.030 ] 00:19:06.030 } 00:19:06.030 ] 00:19:06.030 } 00:19:06.030 [2024-07-22 16:57:07.594596] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:06.030 [2024-07-22 16:57:07.594763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65654 ] 00:19:06.287 [2024-07-22 16:57:07.767707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.545 [2024-07-22 16:57:08.117475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.801 [2024-07-22 16:57:08.405591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:08.958  Copying: 4096/4096 [B] (average 4000 kBps) 00:19:08.958 00:19:08.958 16:57:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:19:08.958 16:57:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:19:08.958 16:57:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:19:08.958 16:57:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:19:08.958 { 00:19:08.958 "subsystems": [ 00:19:08.958 { 00:19:08.958 "subsystem": "bdev", 00:19:08.958 "config": [ 00:19:08.958 { 00:19:08.958 "params": { 00:19:08.958 "trtype": "pcie", 00:19:08.958 "traddr": "0000:00:10.0", 00:19:08.958 "name": "Nvme0" 00:19:08.958 }, 00:19:08.958 "method": "bdev_nvme_attach_controller" 00:19:08.958 }, 00:19:08.958 { 00:19:08.958 "method": "bdev_wait_for_examine" 00:19:08.958 } 00:19:08.958 ] 00:19:08.958 } 00:19:08.958 ] 00:19:08.958 } 00:19:08.958 [2024-07-22 16:57:10.302489] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:08.958 [2024-07-22 16:57:10.302653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65685 ] 00:19:08.958 [2024-07-22 16:57:10.480093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.217 [2024-07-22 16:57:10.810142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.783 [2024-07-22 16:57:11.096482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:11.161  Copying: 4096/4096 [B] (average 4000 kBps) 00:19:11.161 00:19:11.161 16:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ms3vef3bd7vrbmfw2a4rjrx22jjbjo5urs93o64s342r77mmm49mve0bwaaayeus7foho5tdi1wbryzks6gln9j7tif03f95z8li3kt8zzj8ysqdtoxrt0jod6tlcf6fdar46klogcjz51iv122hwwxvuku85choq6kp5vkk7kdl1jotzp23wasizndt5aa02z8m5q17go3qlcvjda8qs8t9prnss6efu28hu6lqjh1yoz1x4zcvkvzbhs2hv3kczr3dxbp7en2wafm6pfy6zsriep1w015sg7tb6rj6zsbb278ayxn2fpz7puhcmwfwubfbjd9yiqhd9yqwg0sfzbwsugb8x0t6jofexss6gbib6uqw4216ptdxogkkm41zdxz0ockj5p225pn94irnsu5s37re580l3sw0fpw9qk4mpvq3c5m0oymm5jyeq2y500z52i3ay9mh8ry12dxcbva0y4rf3i4x0t8w26bhhtzxb5p62vtt8cxzp0vvdk20izld983btwxnl7i1l8aztbu1fp1ykmlykmf2wdyh122bqy3br407qopkwvhp37th2b7qoks1b5g79sfqngldcn4fim9zqakjbqeenpg2a6f46tfwr7ufe35kjc5mzeuba07sbhw0258nz6av9u8kmwr55b146tk1yseutatk3j4egf256bvdfp8i6scl2op8gudews1fbhqofc9pg55avrfj2bqrrbq8vr4onjczkhfbe3ux601vona1k3vnp5pxplaoi2u6d3g6xri983i5f2qa67q728kme4bf6irl7zcddc7zizg3aalxkm54u37zn2poz3kzlolisge4bmlyqem539u39mcpkww7if5l0pup8w4l0dx2ig47jil72un4xsqlujrknx9io7xr1zvfaat4z2vyqvdilywdh5af06s15uqo9nzvojmv669mzcni61vf74ymq1uigq6tomm1gemh08bib7kdydfg0li5d7ay39dhqdopm53cfmlyxzryuwo34wipq13teyspxvm037b1ku3dwqyrjnrin3zm38kv0tb40rofz4gsn7cc6zak3gs2y6uipijwzyiv6sv1cknbksp5tm56y2fyashxyy92iaoivi3bt2pe5w2ttioxq1u1pkkc1zhfiwdclned4o521lwpw03s47yg3m79o1exvdb33aj5k0a8rbw9pbq8li7gcamuy672yhi94rwrn84hfysuami2bakyv4gdvfdv53iegp4diai5599ycxu9wxa9dlinhvnlo9rkxmfs6qe31tvby2tv2a12gr09drrpsctxvsdjjeddevzc6wntlwijwlc6duwb5f0fkeh53lgxr9blucm0l03p1mtn7dcqzszu6drz69eexeb3netrppv5z549la9patldhfayiysuk2734nssekp4v0fpl3ssao7g3p3sapavbxg6gblzto9qrcesfhbau68qfmo58r82xpn5v2t1uj82qrf6xxw5xm3togtu69836f32m4naprpay0pgx3rj2z9zdd3f9kuuizymm1xdkgo4e5onyi73vn03kn0812hcikmvtyuee7p5dfd0yzlcxuv6d53aiaghrszhkp0zykf4l28b1ka31aj58f003sbilbflpbybe04nv977jtl5huk4quxgns7kjjycrls0ql7zw2ifn2zqp40q4i9jh3j9e4d57ukv3ps175pfi8zn689u6tswoc4k58mo6iwspoixscusyymsk50wg5c7ph9wdm43ujxavt35kia027fewvesujv22u690ewng5ad951vqjyc0odyfrdcqki1x1g8cmnk08jdkmtxn017b4qjg1zko4panjtwvbaai0jh7wwsjynys9zftsaptl1ojgc3ow7zn1fztq80lm4wyphhiv7krf78vwv0gglawm8npdg50n4pqrlcxqzecakron1v5zesa1x7yj3ixuvrtc3f0drsnfpcuskd2umh3m0trqdnj5npk95fdjpt9eavmkajrpuwpwzk9839dwoz8dh5dihb1joinvq186o689waivlb6ljbj35xhtsvau48hg59ppdudlxuakypwhwrpd0hz8dysr5zox6x6illo2igrektm0vcqm7gq3m39uerm556tya40lg5icycfqqjs5534r3k5zw8aeo7ytuolx2kjib04jpvyw0pgs4hrjjt6wds1fnv3vwkboc0moc1iptst0me5ledtpfayj8kb52o3s4uufq579oc213xhawbn1nd66yjcso2wsyfuzyvfak7er5ljkc3ouhpt52vogavv69pd3n61yeif6fkfxnr4c9mg7r8j6y02js5h2rwqoeraz7lf1i13iqca1llo0uhmu3r9zw1w61qcd71ipc50w7ohqu4k4imqziynpr7behfq3o8bwocxfu0c4n7givhqzmxiw175c7xgq5n3nhx2h6sj79uy9xmot8x0ph5e5qa2mzlrqp8du2ec7m6y4cfuv27a9izpus7njm24bpgfbiklm3p8h78o8yh2wux2n7oq6pf0em3lj44s3blvj2pjy35rbyvnu0v39dexpcolj3ic1jyjlcen2pnds3fibh9
p1d00h1og65llc0zxxczc8oc51yggrc3visvik3bavhbm2mugpzko3id0dkhkzgkc6sd8rp2fvo338p5lhs6o14qg646le8w69934psjybr29vw66oebr8b69fklkqpc4uykf57rf5uynglkng7jel8x6q0o6ho4jrqcgv4kn22oldbzepb3pkmo8oxplpsi3kpmeb103u68msf0pc0l2e31d9l4cdtavoytgjxi14s7bmymxmkegnxjvvp8ial4mz6gdgq36wsewbulpiu3o4k7bqj7075gtgcno2q8ldebxie5tfaeq4r84ho02fgtua0a100gbit02o0v7mxfauann0b7mluphti01fbnyvfoqmgo342dz38qw3nxkfv9q3b1jco8qjxem3277y29k1yv8yoy4fcnyn9ne5tw2sjhutml6xz7griv43skbdzsg60fnwhb4tgjnvtefi7jnkrgoore66ggciup84waipm4n3yawbgemo5vr7cy4yzmm1nkjt4jirbaf8xkd48gubg0vfxmsbq8nndse8bc92lohka3ja8py935ydc0vih9ohnj0cm1c6qhrzzl9l0i1zhhl343bzb87sb8zm3slyk3ovi0ele0iaz2z247fujm78w34c96vr9qlypi4ofwbz1b2uptpvgwhie905h7bjumobcqhjyq8p8gi6j0m47ibfpslwhxpa6sjsrc2b44n61dqaa88pic8x5kyiallpa8xsk9bgb2nadk40jx250gvh7ebijv9hke51bxum6yy5h8iczqgfna549ltp6c5euuk9xwnhdgrss2wiz54uzxunutonnz8c4nijsl4qyqig4iqwto6d4459hb12b6kvu1msv9dukelcj8mg66yz920luldtk6ltmd6xcog8cpr64fafa39tbcjg6wcg188f5xhqwxvgofd22uiaimqjh0e31hsv6r9xsjnirbua0hiz6jswzeqmp29efpm78iev6e4ojcde0vhp5mow64hvv3krx133h78j36rknr46h26sg8b5omu6gpjml4demgj3mvj7h68msqtkoe5n77rlet9bbky0mkrg2qolhh528kdcuk4p51374t71jj7wghv6nrxmai5hs9lactzl0cxmwm8pakgt210t4rcb5nlm5uchdx5uec6aaaujahss71hb38odsup0bb4pr8ehdk06c752vz0zfofofbq8yn5g79q0p2zp8wudcn2nutb7j2j1dpokbn6wwyruinkt7ne1rgoqjmsvzmfza78immrszpubezajpypst5ng8fqptfm8h0mgrmuw7s762v8wy8kgyhmzdum9qsvrxe3rj8oom6byldvrfhpywbf9ciixulbtvl9g275i32focvi1npai3acpoqdrc3cr7usvqplrmhjpctrcndvmhy53oz0tj9fw8ti2u1kmu1 == \m\s\3\v\e\f\3\b\d\7\v\r\b\m\f\w\2\a\4\r\j\r\x\2\2\j\j\b\j\o\5\u\r\s\9\3\o\6\4\s\3\4\2\r\7\7\m\m\m\4\9\m\v\e\0\b\w\a\a\a\y\e\u\s\7\f\o\h\o\5\t\d\i\1\w\b\r\y\z\k\s\6\g\l\n\9\j\7\t\i\f\0\3\f\9\5\z\8\l\i\3\k\t\8\z\z\j\8\y\s\q\d\t\o\x\r\t\0\j\o\d\6\t\l\c\f\6\f\d\a\r\4\6\k\l\o\g\c\j\z\5\1\i\v\1\2\2\h\w\w\x\v\u\k\u\8\5\c\h\o\q\6\k\p\5\v\k\k\7\k\d\l\1\j\o\t\z\p\2\3\w\a\s\i\z\n\d\t\5\a\a\0\2\z\8\m\5\q\1\7\g\o\3\q\l\c\v\j\d\a\8\q\s\8\t\9\p\r\n\s\s\6\e\f\u\2\8\h\u\6\l\q\j\h\1\y\o\z\1\x\4\z\c\v\k\v\z\b\h\s\2\h\v\3\k\c\z\r\3\d\x\b\p\7\e\n\2\w\a\f\m\6\p\f\y\6\z\s\r\i\e\p\1\w\0\1\5\s\g\7\t\b\6\r\j\6\z\s\b\b\2\7\8\a\y\x\n\2\f\p\z\7\p\u\h\c\m\w\f\w\u\b\f\b\j\d\9\y\i\q\h\d\9\y\q\w\g\0\s\f\z\b\w\s\u\g\b\8\x\0\t\6\j\o\f\e\x\s\s\6\g\b\i\b\6\u\q\w\4\2\1\6\p\t\d\x\o\g\k\k\m\4\1\z\d\x\z\0\o\c\k\j\5\p\2\2\5\p\n\9\4\i\r\n\s\u\5\s\3\7\r\e\5\8\0\l\3\s\w\0\f\p\w\9\q\k\4\m\p\v\q\3\c\5\m\0\o\y\m\m\5\j\y\e\q\2\y\5\0\0\z\5\2\i\3\a\y\9\m\h\8\r\y\1\2\d\x\c\b\v\a\0\y\4\r\f\3\i\4\x\0\t\8\w\2\6\b\h\h\t\z\x\b\5\p\6\2\v\t\t\8\c\x\z\p\0\v\v\d\k\2\0\i\z\l\d\9\8\3\b\t\w\x\n\l\7\i\1\l\8\a\z\t\b\u\1\f\p\1\y\k\m\l\y\k\m\f\2\w\d\y\h\1\2\2\b\q\y\3\b\r\4\0\7\q\o\p\k\w\v\h\p\3\7\t\h\2\b\7\q\o\k\s\1\b\5\g\7\9\s\f\q\n\g\l\d\c\n\4\f\i\m\9\z\q\a\k\j\b\q\e\e\n\p\g\2\a\6\f\4\6\t\f\w\r\7\u\f\e\3\5\k\j\c\5\m\z\e\u\b\a\0\7\s\b\h\w\0\2\5\8\n\z\6\a\v\9\u\8\k\m\w\r\5\5\b\1\4\6\t\k\1\y\s\e\u\t\a\t\k\3\j\4\e\g\f\2\5\6\b\v\d\f\p\8\i\6\s\c\l\2\o\p\8\g\u\d\e\w\s\1\f\b\h\q\o\f\c\9\p\g\5\5\a\v\r\f\j\2\b\q\r\r\b\q\8\v\r\4\o\n\j\c\z\k\h\f\b\e\3\u\x\6\0\1\v\o\n\a\1\k\3\v\n\p\5\p\x\p\l\a\o\i\2\u\6\d\3\g\6\x\r\i\9\8\3\i\5\f\2\q\a\6\7\q\7\2\8\k\m\e\4\b\f\6\i\r\l\7\z\c\d\d\c\7\z\i\z\g\3\a\a\l\x\k\m\5\4\u\3\7\z\n\2\p\o\z\3\k\z\l\o\l\i\s\g\e\4\b\m\l\y\q\e\m\5\3\9\u\3\9\m\c\p\k\w\w\7\i\f\5\l\0\p\u\p\8\w\4\l\0\d\x\2\i\g\4\7\j\i\l\7\2\u\n\4\x\s\q\l\u\j\r\k\n\x\9\i\o\7\x\r\1\z\v\f\a\a\t\4\z\2\v\y\q\v\d\i\l\y\w\d\h\5\a\f\0\6\s\1\5\u\q\o\9\n\z\v\o\j\m\v\6\6\9\m\z\c\n\i\6\1\v\f\7\4\y\m\q\1\u\i\g\q\6\t\o\m\m\1\g\e\m\h\0\8\b\i\b\7\k\d\y\d\f\g\0\l\i\5\d\7\a\y\3\9\d\h\q\d\o\p\m\5\3\c\f\m\l\y\x\z\r\y\u\w\o\3\4\w\i\p\q\1\3\t\e\y\s\p\x\v\m\0\3\
7\b\1\k\u\3\d\w\q\y\r\j\n\r\i\n\3\z\m\3\8\k\v\0\t\b\4\0\r\o\f\z\4\g\s\n\7\c\c\6\z\a\k\3\g\s\2\y\6\u\i\p\i\j\w\z\y\i\v\6\s\v\1\c\k\n\b\k\s\p\5\t\m\5\6\y\2\f\y\a\s\h\x\y\y\9\2\i\a\o\i\v\i\3\b\t\2\p\e\5\w\2\t\t\i\o\x\q\1\u\1\p\k\k\c\1\z\h\f\i\w\d\c\l\n\e\d\4\o\5\2\1\l\w\p\w\0\3\s\4\7\y\g\3\m\7\9\o\1\e\x\v\d\b\3\3\a\j\5\k\0\a\8\r\b\w\9\p\b\q\8\l\i\7\g\c\a\m\u\y\6\7\2\y\h\i\9\4\r\w\r\n\8\4\h\f\y\s\u\a\m\i\2\b\a\k\y\v\4\g\d\v\f\d\v\5\3\i\e\g\p\4\d\i\a\i\5\5\9\9\y\c\x\u\9\w\x\a\9\d\l\i\n\h\v\n\l\o\9\r\k\x\m\f\s\6\q\e\3\1\t\v\b\y\2\t\v\2\a\1\2\g\r\0\9\d\r\r\p\s\c\t\x\v\s\d\j\j\e\d\d\e\v\z\c\6\w\n\t\l\w\i\j\w\l\c\6\d\u\w\b\5\f\0\f\k\e\h\5\3\l\g\x\r\9\b\l\u\c\m\0\l\0\3\p\1\m\t\n\7\d\c\q\z\s\z\u\6\d\r\z\6\9\e\e\x\e\b\3\n\e\t\r\p\p\v\5\z\5\4\9\l\a\9\p\a\t\l\d\h\f\a\y\i\y\s\u\k\2\7\3\4\n\s\s\e\k\p\4\v\0\f\p\l\3\s\s\a\o\7\g\3\p\3\s\a\p\a\v\b\x\g\6\g\b\l\z\t\o\9\q\r\c\e\s\f\h\b\a\u\6\8\q\f\m\o\5\8\r\8\2\x\p\n\5\v\2\t\1\u\j\8\2\q\r\f\6\x\x\w\5\x\m\3\t\o\g\t\u\6\9\8\3\6\f\3\2\m\4\n\a\p\r\p\a\y\0\p\g\x\3\r\j\2\z\9\z\d\d\3\f\9\k\u\u\i\z\y\m\m\1\x\d\k\g\o\4\e\5\o\n\y\i\7\3\v\n\0\3\k\n\0\8\1\2\h\c\i\k\m\v\t\y\u\e\e\7\p\5\d\f\d\0\y\z\l\c\x\u\v\6\d\5\3\a\i\a\g\h\r\s\z\h\k\p\0\z\y\k\f\4\l\2\8\b\1\k\a\3\1\a\j\5\8\f\0\0\3\s\b\i\l\b\f\l\p\b\y\b\e\0\4\n\v\9\7\7\j\t\l\5\h\u\k\4\q\u\x\g\n\s\7\k\j\j\y\c\r\l\s\0\q\l\7\z\w\2\i\f\n\2\z\q\p\4\0\q\4\i\9\j\h\3\j\9\e\4\d\5\7\u\k\v\3\p\s\1\7\5\p\f\i\8\z\n\6\8\9\u\6\t\s\w\o\c\4\k\5\8\m\o\6\i\w\s\p\o\i\x\s\c\u\s\y\y\m\s\k\5\0\w\g\5\c\7\p\h\9\w\d\m\4\3\u\j\x\a\v\t\3\5\k\i\a\0\2\7\f\e\w\v\e\s\u\j\v\2\2\u\6\9\0\e\w\n\g\5\a\d\9\5\1\v\q\j\y\c\0\o\d\y\f\r\d\c\q\k\i\1\x\1\g\8\c\m\n\k\0\8\j\d\k\m\t\x\n\0\1\7\b\4\q\j\g\1\z\k\o\4\p\a\n\j\t\w\v\b\a\a\i\0\j\h\7\w\w\s\j\y\n\y\s\9\z\f\t\s\a\p\t\l\1\o\j\g\c\3\o\w\7\z\n\1\f\z\t\q\8\0\l\m\4\w\y\p\h\h\i\v\7\k\r\f\7\8\v\w\v\0\g\g\l\a\w\m\8\n\p\d\g\5\0\n\4\p\q\r\l\c\x\q\z\e\c\a\k\r\o\n\1\v\5\z\e\s\a\1\x\7\y\j\3\i\x\u\v\r\t\c\3\f\0\d\r\s\n\f\p\c\u\s\k\d\2\u\m\h\3\m\0\t\r\q\d\n\j\5\n\p\k\9\5\f\d\j\p\t\9\e\a\v\m\k\a\j\r\p\u\w\p\w\z\k\9\8\3\9\d\w\o\z\8\d\h\5\d\i\h\b\1\j\o\i\n\v\q\1\8\6\o\6\8\9\w\a\i\v\l\b\6\l\j\b\j\3\5\x\h\t\s\v\a\u\4\8\h\g\5\9\p\p\d\u\d\l\x\u\a\k\y\p\w\h\w\r\p\d\0\h\z\8\d\y\s\r\5\z\o\x\6\x\6\i\l\l\o\2\i\g\r\e\k\t\m\0\v\c\q\m\7\g\q\3\m\3\9\u\e\r\m\5\5\6\t\y\a\4\0\l\g\5\i\c\y\c\f\q\q\j\s\5\5\3\4\r\3\k\5\z\w\8\a\e\o\7\y\t\u\o\l\x\2\k\j\i\b\0\4\j\p\v\y\w\0\p\g\s\4\h\r\j\j\t\6\w\d\s\1\f\n\v\3\v\w\k\b\o\c\0\m\o\c\1\i\p\t\s\t\0\m\e\5\l\e\d\t\p\f\a\y\j\8\k\b\5\2\o\3\s\4\u\u\f\q\5\7\9\o\c\2\1\3\x\h\a\w\b\n\1\n\d\6\6\y\j\c\s\o\2\w\s\y\f\u\z\y\v\f\a\k\7\e\r\5\l\j\k\c\3\o\u\h\p\t\5\2\v\o\g\a\v\v\6\9\p\d\3\n\6\1\y\e\i\f\6\f\k\f\x\n\r\4\c\9\m\g\7\r\8\j\6\y\0\2\j\s\5\h\2\r\w\q\o\e\r\a\z\7\l\f\1\i\1\3\i\q\c\a\1\l\l\o\0\u\h\m\u\3\r\9\z\w\1\w\6\1\q\c\d\7\1\i\p\c\5\0\w\7\o\h\q\u\4\k\4\i\m\q\z\i\y\n\p\r\7\b\e\h\f\q\3\o\8\b\w\o\c\x\f\u\0\c\4\n\7\g\i\v\h\q\z\m\x\i\w\1\7\5\c\7\x\g\q\5\n\3\n\h\x\2\h\6\s\j\7\9\u\y\9\x\m\o\t\8\x\0\p\h\5\e\5\q\a\2\m\z\l\r\q\p\8\d\u\2\e\c\7\m\6\y\4\c\f\u\v\2\7\a\9\i\z\p\u\s\7\n\j\m\2\4\b\p\g\f\b\i\k\l\m\3\p\8\h\7\8\o\8\y\h\2\w\u\x\2\n\7\o\q\6\p\f\0\e\m\3\l\j\4\4\s\3\b\l\v\j\2\p\j\y\3\5\r\b\y\v\n\u\0\v\3\9\d\e\x\p\c\o\l\j\3\i\c\1\j\y\j\l\c\e\n\2\p\n\d\s\3\f\i\b\h\9\p\1\d\0\0\h\1\o\g\6\5\l\l\c\0\z\x\x\c\z\c\8\o\c\5\1\y\g\g\r\c\3\v\i\s\v\i\k\3\b\a\v\h\b\m\2\m\u\g\p\z\k\o\3\i\d\0\d\k\h\k\z\g\k\c\6\s\d\8\r\p\2\f\v\o\3\3\8\p\5\l\h\s\6\o\1\4\q\g\6\4\6\l\e\8\w\6\9\9\3\4\p\s\j\y\b\r\2\9\v\w\6\6\o\e\b\r\8\b\6\9\f\k\l\k\q\p\c\4\u\y\k\f\5\7\r\f\5\u\y\n\g\l\k\n\g\7\j\e\l\8\x\6\q\0\o\6\h\o\4\j\r\q\c\g\v\4\k\n\2\2\o\l\d\b\z\e\p\b\3\p\k\m\o\8\o\x
\p\l\p\s\i\3\k\p\m\e\b\1\0\3\u\6\8\m\s\f\0\p\c\0\l\2\e\3\1\d\9\l\4\c\d\t\a\v\o\y\t\g\j\x\i\1\4\s\7\b\m\y\m\x\m\k\e\g\n\x\j\v\v\p\8\i\a\l\4\m\z\6\g\d\g\q\3\6\w\s\e\w\b\u\l\p\i\u\3\o\4\k\7\b\q\j\7\0\7\5\g\t\g\c\n\o\2\q\8\l\d\e\b\x\i\e\5\t\f\a\e\q\4\r\8\4\h\o\0\2\f\g\t\u\a\0\a\1\0\0\g\b\i\t\0\2\o\0\v\7\m\x\f\a\u\a\n\n\0\b\7\m\l\u\p\h\t\i\0\1\f\b\n\y\v\f\o\q\m\g\o\3\4\2\d\z\3\8\q\w\3\n\x\k\f\v\9\q\3\b\1\j\c\o\8\q\j\x\e\m\3\2\7\7\y\2\9\k\1\y\v\8\y\o\y\4\f\c\n\y\n\9\n\e\5\t\w\2\s\j\h\u\t\m\l\6\x\z\7\g\r\i\v\4\3\s\k\b\d\z\s\g\6\0\f\n\w\h\b\4\t\g\j\n\v\t\e\f\i\7\j\n\k\r\g\o\o\r\e\6\6\g\g\c\i\u\p\8\4\w\a\i\p\m\4\n\3\y\a\w\b\g\e\m\o\5\v\r\7\c\y\4\y\z\m\m\1\n\k\j\t\4\j\i\r\b\a\f\8\x\k\d\4\8\g\u\b\g\0\v\f\x\m\s\b\q\8\n\n\d\s\e\8\b\c\9\2\l\o\h\k\a\3\j\a\8\p\y\9\3\5\y\d\c\0\v\i\h\9\o\h\n\j\0\c\m\1\c\6\q\h\r\z\z\l\9\l\0\i\1\z\h\h\l\3\4\3\b\z\b\8\7\s\b\8\z\m\3\s\l\y\k\3\o\v\i\0\e\l\e\0\i\a\z\2\z\2\4\7\f\u\j\m\7\8\w\3\4\c\9\6\v\r\9\q\l\y\p\i\4\o\f\w\b\z\1\b\2\u\p\t\p\v\g\w\h\i\e\9\0\5\h\7\b\j\u\m\o\b\c\q\h\j\y\q\8\p\8\g\i\6\j\0\m\4\7\i\b\f\p\s\l\w\h\x\p\a\6\s\j\s\r\c\2\b\4\4\n\6\1\d\q\a\a\8\8\p\i\c\8\x\5\k\y\i\a\l\l\p\a\8\x\s\k\9\b\g\b\2\n\a\d\k\4\0\j\x\2\5\0\g\v\h\7\e\b\i\j\v\9\h\k\e\5\1\b\x\u\m\6\y\y\5\h\8\i\c\z\q\g\f\n\a\5\4\9\l\t\p\6\c\5\e\u\u\k\9\x\w\n\h\d\g\r\s\s\2\w\i\z\5\4\u\z\x\u\n\u\t\o\n\n\z\8\c\4\n\i\j\s\l\4\q\y\q\i\g\4\i\q\w\t\o\6\d\4\4\5\9\h\b\1\2\b\6\k\v\u\1\m\s\v\9\d\u\k\e\l\c\j\8\m\g\6\6\y\z\9\2\0\l\u\l\d\t\k\6\l\t\m\d\6\x\c\o\g\8\c\p\r\6\4\f\a\f\a\3\9\t\b\c\j\g\6\w\c\g\1\8\8\f\5\x\h\q\w\x\v\g\o\f\d\2\2\u\i\a\i\m\q\j\h\0\e\3\1\h\s\v\6\r\9\x\s\j\n\i\r\b\u\a\0\h\i\z\6\j\s\w\z\e\q\m\p\2\9\e\f\p\m\7\8\i\e\v\6\e\4\o\j\c\d\e\0\v\h\p\5\m\o\w\6\4\h\v\v\3\k\r\x\1\3\3\h\7\8\j\3\6\r\k\n\r\4\6\h\2\6\s\g\8\b\5\o\m\u\6\g\p\j\m\l\4\d\e\m\g\j\3\m\v\j\7\h\6\8\m\s\q\t\k\o\e\5\n\7\7\r\l\e\t\9\b\b\k\y\0\m\k\r\g\2\q\o\l\h\h\5\2\8\k\d\c\u\k\4\p\5\1\3\7\4\t\7\1\j\j\7\w\g\h\v\6\n\r\x\m\a\i\5\h\s\9\l\a\c\t\z\l\0\c\x\m\w\m\8\p\a\k\g\t\2\1\0\t\4\r\c\b\5\n\l\m\5\u\c\h\d\x\5\u\e\c\6\a\a\a\u\j\a\h\s\s\7\1\h\b\3\8\o\d\s\u\p\0\b\b\4\p\r\8\e\h\d\k\0\6\c\7\5\2\v\z\0\z\f\o\f\o\f\b\q\8\y\n\5\g\7\9\q\0\p\2\z\p\8\w\u\d\c\n\2\n\u\t\b\7\j\2\j\1\d\p\o\k\b\n\6\w\w\y\r\u\i\n\k\t\7\n\e\1\r\g\o\q\j\m\s\v\z\m\f\z\a\7\8\i\m\m\r\s\z\p\u\b\e\z\a\j\p\y\p\s\t\5\n\g\8\f\q\p\t\f\m\8\h\0\m\g\r\m\u\w\7\s\7\6\2\v\8\w\y\8\k\g\y\h\m\z\d\u\m\9\q\s\v\r\x\e\3\r\j\8\o\o\m\6\b\y\l\d\v\r\f\h\p\y\w\b\f\9\c\i\i\x\u\l\b\t\v\l\9\g\2\7\5\i\3\2\f\o\c\v\i\1\n\p\a\i\3\a\c\p\o\q\d\r\c\3\c\r\7\u\s\v\q\p\l\r\m\h\j\p\c\t\r\c\n\d\v\m\h\y\5\3\o\z\0\t\j\9\f\w\8\t\i\2\u\1\k\m\u\1 ]] 00:19:11.162 ************************************ 00:19:11.162 END TEST dd_rw_offset 00:19:11.162 ************************************ 00:19:11.162 00:19:11.162 real 0m5.193s 00:19:11.162 user 0m4.492s 00:19:11.162 sys 0m2.274s 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:19:11.162 16:57:12 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:19:11.162 16:57:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:19:11.162 { 00:19:11.162 "subsystems": [ 00:19:11.162 { 00:19:11.162 "subsystem": "bdev", 00:19:11.162 "config": [ 00:19:11.162 { 00:19:11.162 "params": { 00:19:11.162 "trtype": "pcie", 00:19:11.162 "traddr": "0000:00:10.0", 00:19:11.162 "name": "Nvme0" 00:19:11.162 }, 00:19:11.162 "method": "bdev_nvme_attach_controller" 00:19:11.162 }, 00:19:11.162 { 00:19:11.162 "method": "bdev_wait_for_examine" 00:19:11.162 } 00:19:11.162 ] 00:19:11.162 } 00:19:11.162 ] 00:19:11.162 } 00:19:11.162 [2024-07-22 16:57:12.730161] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:11.163 [2024-07-22 16:57:12.730320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65743 ] 00:19:11.428 [2024-07-22 16:57:12.898371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.685 [2024-07-22 16:57:13.177468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.943 [2024-07-22 16:57:13.470962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:14.100  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:14.100 00:19:14.100 16:57:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:14.100 00:19:14.100 real 0m59.718s 00:19:14.100 user 0m51.192s 00:19:14.100 sys 0m24.798s 00:19:14.100 16:57:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.100 ************************************ 00:19:14.100 END TEST spdk_dd_basic_rw 00:19:14.100 ************************************ 00:19:14.100 16:57:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:19:14.100 16:57:15 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:19:14.100 16:57:15 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:19:14.100 16:57:15 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:14.100 16:57:15 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.100 16:57:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:19:14.100 ************************************ 00:19:14.100 START TEST spdk_dd_posix 00:19:14.100 ************************************ 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:19:14.100 * Looking for test storage... 
00:19:14.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:19:14.100 * First test run, liburing in use 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:14.100 ************************************ 00:19:14.100 START TEST dd_flag_append 00:19:14.100 ************************************ 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=8om0zhjk7ilep4u6fzgjjmbwcnqwjzbi 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=5el76l2utb00bifr2jxrqc9pv2gnhynr 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 8om0zhjk7ilep4u6fzgjjmbwcnqwjzbi 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 5el76l2utb00bifr2jxrqc9pv2gnhynr 00:19:14.100 16:57:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:19:14.100 [2024-07-22 16:57:15.612342] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:14.100 [2024-07-22 16:57:15.613374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65824 ] 00:19:14.357 [2024-07-22 16:57:15.803082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.614 [2024-07-22 16:57:16.116751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.872 [2024-07-22 16:57:16.395990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:16.529  Copying: 32/32 [B] (average 31 kBps) 00:19:16.529 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 5el76l2utb00bifr2jxrqc9pv2gnhynr8om0zhjk7ilep4u6fzgjjmbwcnqwjzbi == \5\e\l\7\6\l\2\u\t\b\0\0\b\i\f\r\2\j\x\r\q\c\9\p\v\2\g\n\h\y\n\r\8\o\m\0\z\h\j\k\7\i\l\e\p\4\u\6\f\z\g\j\j\m\b\w\c\n\q\w\j\z\b\i ]] 00:19:16.529 00:19:16.529 real 0m2.622s 00:19:16.529 user 0m2.184s 00:19:16.529 sys 0m1.303s 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:19:16.529 ************************************ 00:19:16.529 END TEST dd_flag_append 00:19:16.529 ************************************ 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:16.529 ************************************ 00:19:16.529 START TEST dd_flag_directory 00:19:16.529 ************************************ 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.529 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:16.787 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.787 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:19:16.787 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.787 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:16.787 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:16.787 16:57:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:16.787 [2024-07-22 16:57:18.292793] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:16.787 [2024-07-22 16:57:18.292974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65876 ] 00:19:17.044 [2024-07-22 16:57:18.489713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.301 [2024-07-22 16:57:18.907935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.866 [2024-07-22 16:57:19.275737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:17.866 [2024-07-22 16:57:19.428146] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:17.866 [2024-07-22 16:57:19.428231] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:17.866 [2024-07-22 16:57:19.428277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:18.878 [2024-07-22 16:57:20.423527] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:19:19.444 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.445 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.445 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.445 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.445 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.445 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:19.445 16:57:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:19.702 [2024-07-22 16:57:21.105194] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:19.702 [2024-07-22 16:57:21.105385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65909 ] 00:19:19.702 [2024-07-22 16:57:21.291871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.272 [2024-07-22 16:57:21.669115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.530 [2024-07-22 16:57:21.966542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:20.530 [2024-07-22 16:57:22.131445] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:20.530 [2024-07-22 16:57:22.131512] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:20.530 [2024-07-22 16:57:22.131537] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:21.470 [2024-07-22 16:57:23.070076] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:22.036 00:19:22.036 real 0m5.487s 00:19:22.036 user 0m4.599s 00:19:22.036 sys 0m0.642s 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.036 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:19:22.036 ************************************ 00:19:22.036 END TEST dd_flag_directory 00:19:22.037 
************************************ 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:22.296 ************************************ 00:19:22.296 START TEST dd_flag_nofollow 00:19:22.296 ************************************ 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:22.296 16:57:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:22.296 
[2024-07-22 16:57:23.839414] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:22.296 [2024-07-22 16:57:23.839613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65966 ] 00:19:22.555 [2024-07-22 16:57:24.025219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.814 [2024-07-22 16:57:24.376115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.072 [2024-07-22 16:57:24.676988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:23.388 [2024-07-22 16:57:24.819626] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:19:23.388 [2024-07-22 16:57:24.819699] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:19:23.388 [2024-07-22 16:57:24.819741] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:24.344 [2024-07-22 16:57:25.770409] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:24.912 16:57:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:24.912 [2024-07-22 16:57:26.422647] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:24.912 [2024-07-22 16:57:26.422796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65993 ] 00:19:25.170 [2024-07-22 16:57:26.595757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.434 [2024-07-22 16:57:26.946090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.724 [2024-07-22 16:57:27.261599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:25.982 [2024-07-22 16:57:27.405005] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:19:25.982 [2024-07-22 16:57:27.405093] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:19:25.982 [2024-07-22 16:57:27.405133] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:26.920 [2024-07-22 16:57:28.408334] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:19:27.488 16:57:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:27.488 [2024-07-22 16:57:29.102226] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:27.488 [2024-07-22 16:57:29.102423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66024 ] 00:19:27.746 [2024-07-22 16:57:29.276620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.005 [2024-07-22 16:57:29.565183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.263 [2024-07-22 16:57:29.855028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:30.423  Copying: 512/512 [B] (average 500 kBps) 00:19:30.423 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 8pgtq95zp6zlos05njrhw6wphmzl4qsl62dnsp5irq3zzviy4fcmplgnfw9a2byfz7h2ur6a17f48mp5q9wct9m1ws65qwit3uxb035xop339jgjlwlribmys91zo46iziro4cnmxegeqc5kxszuuguj45v0pcnl6d17hd00pja5v57rbu6pzvvahfogu5zyy2iqaj8xcug8mfclw5hkom96bkrwngengxai6nakx4zw2v5xpyn6u558jiebq7d7l556illtuwtm35bcsudoquc1yzeo1afyuangxnlvrwk4xm2yum8lcf6hsoxqaihg0803noni2qfvsrp18cyvrw0k9y86q8pyus1yux1sd8gd2rznia8rs9q2q9jvw0ev344qam0kw37c4rcs9vg2fk2cru3ff1ef67y487luvlp3q6pf2aqnm6opijhxskmdmjshvmwjjlunklqcuk4irhz1vwog7s6dbsdvoe6kf5a17o474pd9tc16ujcmhpao == \8\p\g\t\q\9\5\z\p\6\z\l\o\s\0\5\n\j\r\h\w\6\w\p\h\m\z\l\4\q\s\l\6\2\d\n\s\p\5\i\r\q\3\z\z\v\i\y\4\f\c\m\p\l\g\n\f\w\9\a\2\b\y\f\z\7\h\2\u\r\6\a\1\7\f\4\8\m\p\5\q\9\w\c\t\9\m\1\w\s\6\5\q\w\i\t\3\u\x\b\0\3\5\x\o\p\3\3\9\j\g\j\l\w\l\r\i\b\m\y\s\9\1\z\o\4\6\i\z\i\r\o\4\c\n\m\x\e\g\e\q\c\5\k\x\s\z\u\u\g\u\j\4\5\v\0\p\c\n\l\6\d\1\7\h\d\0\0\p\j\a\5\v\5\7\r\b\u\6\p\z\v\v\a\h\f\o\g\u\5\z\y\y\2\i\q\a\j\8\x\c\u\g\8\m\f\c\l\w\5\h\k\o\m\9\6\b\k\r\w\n\g\e\n\g\x\a\i\6\n\a\k\x\4\z\w\2\v\5\x\p\y\n\6\u\5\5\8\j\i\e\b\q\7\d\7\l\5\5\6\i\l\l\t\u\w\t\m\3\5\b\c\s\u\d\o\q\u\c\1\y\z\e\o\1\a\f\y\u\a\n\g\x\n\l\v\r\w\k\4\x\m\2\y\u\m\8\l\c\f\6\h\s\o\x\q\a\i\h\g\0\8\0\3\n\o\n\i\2\q\f\v\s\r\p\1\8\c\y\v\r\w\0\k\9\y\8\6\q\8\p\y\u\s\1\y\u\x\1\s\d\8\g\d\2\r\z\n\i\a\8\r\s\9\q\2\q\9\j\v\w\0\e\v\3\4\4\q\a\m\0\k\w\3\7\c\4\r\c\s\9\v\g\2\f\k\2\c\r\u\3\f\f\1\e\f\6\7\y\4\8\7\l\u\v\l\p\3\q\6\p\f\2\a\q\n\m\6\o\p\i\j\h\x\s\k\m\d\m\j\s\h\v\m\w\j\j\l\u\n\k\l\q\c\u\k\4\i\r\h\z\1\v\w\o\g\7\s\6\d\b\s\d\v\o\e\6\k\f\5\a\1\7\o\4\7\4\p\d\9\t\c\1\6\u\j\c\m\h\p\a\o ]] 00:19:30.423 00:19:30.423 real 0m7.858s 00:19:30.423 user 0m6.706s 00:19:30.423 sys 0m1.776s 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:19:30.423 ************************************ 00:19:30.423 END TEST dd_flag_nofollow 00:19:30.423 ************************************ 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:30.423 ************************************ 00:19:30.423 START TEST dd_flag_noatime 00:19:30.423 ************************************ 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:19:30.423 16:57:31 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721667450 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721667451 00:19:30.423 16:57:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:19:31.365 16:57:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:31.365 [2024-07-22 16:57:32.719763] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:31.365 [2024-07-22 16:57:32.719917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66089 ] 00:19:31.365 [2024-07-22 16:57:32.890220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.655 [2024-07-22 16:57:33.237997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.222 [2024-07-22 16:57:33.537878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:33.636  Copying: 512/512 [B] (average 500 kBps) 00:19:33.636 00:19:33.636 16:57:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:33.636 16:57:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721667450 )) 00:19:33.636 16:57:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:33.636 16:57:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721667451 )) 00:19:33.636 16:57:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:33.894 [2024-07-22 16:57:35.350644] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:33.894 [2024-07-22 16:57:35.350828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66126 ] 00:19:34.152 [2024-07-22 16:57:35.534402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.409 [2024-07-22 16:57:35.890859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.666 [2024-07-22 16:57:36.168811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:36.296  Copying: 512/512 [B] (average 500 kBps) 00:19:36.296 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721667456 )) 00:19:36.296 00:19:36.296 real 0m6.231s 00:19:36.296 user 0m4.427s 00:19:36.296 sys 0m2.531s 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:36.296 ************************************ 00:19:36.296 END TEST dd_flag_noatime 00:19:36.296 ************************************ 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:36.296 ************************************ 00:19:36.296 START TEST dd_flags_misc 00:19:36.296 ************************************ 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:36.296 16:57:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:36.555 [2024-07-22 16:57:37.989374] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:36.555 [2024-07-22 16:57:37.989514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66172 ] 00:19:36.555 [2024-07-22 16:57:38.160898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.813 [2024-07-22 16:57:38.429032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.379 [2024-07-22 16:57:38.701484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:38.755  Copying: 512/512 [B] (average 500 kBps) 00:19:38.755 00:19:38.755 16:57:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yn8jtszxis59fe2mrxxokikinyouemi6z5fqfxay39egfk3sfhco7gi7gg28z2rcm5qsv69szkneplvg24odnsvxmj6vzo6hm92rbmwddpt5uavju1lao4jeol6aumwl5yxl87qlu1x727wn2m1zhzktf30nzowzs27s0vn26xsf6ejzi6ir4517ysm00fp9zl6uwhfumf1pocaiwwsnewlt0rrhz0lh1cne21ml1rd31ofi37z1ixg7t9aw82pri5roravzksfw654kb2a4qf9e8eqfnn0hwswa9hyb59qthih72mmrpbkwp5jkzbw9vva0ay5bfdncg7s55kn8c5j5l322049m9sri999n3ebm3r9w1a51d669gew4ng3p7riabnvybqah516vcmmz755l5aotdho8cvir4ket3rbtxs4txjcfmx2h14mp5idau3sybghr20j77o00dckpt2quc9g5xd0y3te9neacqaxv66ard4ellj7pmy17biob == \y\n\8\j\t\s\z\x\i\s\5\9\f\e\2\m\r\x\x\o\k\i\k\i\n\y\o\u\e\m\i\6\z\5\f\q\f\x\a\y\3\9\e\g\f\k\3\s\f\h\c\o\7\g\i\7\g\g\2\8\z\2\r\c\m\5\q\s\v\6\9\s\z\k\n\e\p\l\v\g\2\4\o\d\n\s\v\x\m\j\6\v\z\o\6\h\m\9\2\r\b\m\w\d\d\p\t\5\u\a\v\j\u\1\l\a\o\4\j\e\o\l\6\a\u\m\w\l\5\y\x\l\8\7\q\l\u\1\x\7\2\7\w\n\2\m\1\z\h\z\k\t\f\3\0\n\z\o\w\z\s\2\7\s\0\v\n\2\6\x\s\f\6\e\j\z\i\6\i\r\4\5\1\7\y\s\m\0\0\f\p\9\z\l\6\u\w\h\f\u\m\f\1\p\o\c\a\i\w\w\s\n\e\w\l\t\0\r\r\h\z\0\l\h\1\c\n\e\2\1\m\l\1\r\d\3\1\o\f\i\3\7\z\1\i\x\g\7\t\9\a\w\8\2\p\r\i\5\r\o\r\a\v\z\k\s\f\w\6\5\4\k\b\2\a\4\q\f\9\e\8\e\q\f\n\n\0\h\w\s\w\a\9\h\y\b\5\9\q\t\h\i\h\7\2\m\m\r\p\b\k\w\p\5\j\k\z\b\w\9\v\v\a\0\a\y\5\b\f\d\n\c\g\7\s\5\5\k\n\8\c\5\j\5\l\3\2\2\0\4\9\m\9\s\r\i\9\9\9\n\3\e\b\m\3\r\9\w\1\a\5\1\d\6\6\9\g\e\w\4\n\g\3\p\7\r\i\a\b\n\v\y\b\q\a\h\5\1\6\v\c\m\m\z\7\5\5\l\5\a\o\t\d\h\o\8\c\v\i\r\4\k\e\t\3\r\b\t\x\s\4\t\x\j\c\f\m\x\2\h\1\4\m\p\5\i\d\a\u\3\s\y\b\g\h\r\2\0\j\7\7\o\0\0\d\c\k\p\t\2\q\u\c\9\g\5\x\d\0\y\3\t\e\9\n\e\a\c\q\a\x\v\6\6\a\r\d\4\e\l\l\j\7\p\m\y\1\7\b\i\o\b ]] 00:19:38.755 16:57:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:38.755 16:57:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:39.029 [2024-07-22 16:57:40.475048] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:39.029 [2024-07-22 16:57:40.475231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66209 ] 00:19:39.286 [2024-07-22 16:57:40.656056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.545 [2024-07-22 16:57:40.932367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.803 [2024-07-22 16:57:41.203578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.699  Copying: 512/512 [B] (average 500 kBps) 00:19:41.699 00:19:41.700 16:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yn8jtszxis59fe2mrxxokikinyouemi6z5fqfxay39egfk3sfhco7gi7gg28z2rcm5qsv69szkneplvg24odnsvxmj6vzo6hm92rbmwddpt5uavju1lao4jeol6aumwl5yxl87qlu1x727wn2m1zhzktf30nzowzs27s0vn26xsf6ejzi6ir4517ysm00fp9zl6uwhfumf1pocaiwwsnewlt0rrhz0lh1cne21ml1rd31ofi37z1ixg7t9aw82pri5roravzksfw654kb2a4qf9e8eqfnn0hwswa9hyb59qthih72mmrpbkwp5jkzbw9vva0ay5bfdncg7s55kn8c5j5l322049m9sri999n3ebm3r9w1a51d669gew4ng3p7riabnvybqah516vcmmz755l5aotdho8cvir4ket3rbtxs4txjcfmx2h14mp5idau3sybghr20j77o00dckpt2quc9g5xd0y3te9neacqaxv66ard4ellj7pmy17biob == \y\n\8\j\t\s\z\x\i\s\5\9\f\e\2\m\r\x\x\o\k\i\k\i\n\y\o\u\e\m\i\6\z\5\f\q\f\x\a\y\3\9\e\g\f\k\3\s\f\h\c\o\7\g\i\7\g\g\2\8\z\2\r\c\m\5\q\s\v\6\9\s\z\k\n\e\p\l\v\g\2\4\o\d\n\s\v\x\m\j\6\v\z\o\6\h\m\9\2\r\b\m\w\d\d\p\t\5\u\a\v\j\u\1\l\a\o\4\j\e\o\l\6\a\u\m\w\l\5\y\x\l\8\7\q\l\u\1\x\7\2\7\w\n\2\m\1\z\h\z\k\t\f\3\0\n\z\o\w\z\s\2\7\s\0\v\n\2\6\x\s\f\6\e\j\z\i\6\i\r\4\5\1\7\y\s\m\0\0\f\p\9\z\l\6\u\w\h\f\u\m\f\1\p\o\c\a\i\w\w\s\n\e\w\l\t\0\r\r\h\z\0\l\h\1\c\n\e\2\1\m\l\1\r\d\3\1\o\f\i\3\7\z\1\i\x\g\7\t\9\a\w\8\2\p\r\i\5\r\o\r\a\v\z\k\s\f\w\6\5\4\k\b\2\a\4\q\f\9\e\8\e\q\f\n\n\0\h\w\s\w\a\9\h\y\b\5\9\q\t\h\i\h\7\2\m\m\r\p\b\k\w\p\5\j\k\z\b\w\9\v\v\a\0\a\y\5\b\f\d\n\c\g\7\s\5\5\k\n\8\c\5\j\5\l\3\2\2\0\4\9\m\9\s\r\i\9\9\9\n\3\e\b\m\3\r\9\w\1\a\5\1\d\6\6\9\g\e\w\4\n\g\3\p\7\r\i\a\b\n\v\y\b\q\a\h\5\1\6\v\c\m\m\z\7\5\5\l\5\a\o\t\d\h\o\8\c\v\i\r\4\k\e\t\3\r\b\t\x\s\4\t\x\j\c\f\m\x\2\h\1\4\m\p\5\i\d\a\u\3\s\y\b\g\h\r\2\0\j\7\7\o\0\0\d\c\k\p\t\2\q\u\c\9\g\5\x\d\0\y\3\t\e\9\n\e\a\c\q\a\x\v\6\6\a\r\d\4\e\l\l\j\7\p\m\y\1\7\b\i\o\b ]] 00:19:41.700 16:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:41.700 16:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:41.700 [2024-07-22 16:57:42.959841] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:41.700 [2024-07-22 16:57:42.960017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66237 ] 00:19:41.700 [2024-07-22 16:57:43.128411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.958 [2024-07-22 16:57:43.385108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.217 [2024-07-22 16:57:43.656681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:44.119  Copying: 512/512 [B] (average 250 kBps) 00:19:44.119 00:19:44.119 16:57:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yn8jtszxis59fe2mrxxokikinyouemi6z5fqfxay39egfk3sfhco7gi7gg28z2rcm5qsv69szkneplvg24odnsvxmj6vzo6hm92rbmwddpt5uavju1lao4jeol6aumwl5yxl87qlu1x727wn2m1zhzktf30nzowzs27s0vn26xsf6ejzi6ir4517ysm00fp9zl6uwhfumf1pocaiwwsnewlt0rrhz0lh1cne21ml1rd31ofi37z1ixg7t9aw82pri5roravzksfw654kb2a4qf9e8eqfnn0hwswa9hyb59qthih72mmrpbkwp5jkzbw9vva0ay5bfdncg7s55kn8c5j5l322049m9sri999n3ebm3r9w1a51d669gew4ng3p7riabnvybqah516vcmmz755l5aotdho8cvir4ket3rbtxs4txjcfmx2h14mp5idau3sybghr20j77o00dckpt2quc9g5xd0y3te9neacqaxv66ard4ellj7pmy17biob == \y\n\8\j\t\s\z\x\i\s\5\9\f\e\2\m\r\x\x\o\k\i\k\i\n\y\o\u\e\m\i\6\z\5\f\q\f\x\a\y\3\9\e\g\f\k\3\s\f\h\c\o\7\g\i\7\g\g\2\8\z\2\r\c\m\5\q\s\v\6\9\s\z\k\n\e\p\l\v\g\2\4\o\d\n\s\v\x\m\j\6\v\z\o\6\h\m\9\2\r\b\m\w\d\d\p\t\5\u\a\v\j\u\1\l\a\o\4\j\e\o\l\6\a\u\m\w\l\5\y\x\l\8\7\q\l\u\1\x\7\2\7\w\n\2\m\1\z\h\z\k\t\f\3\0\n\z\o\w\z\s\2\7\s\0\v\n\2\6\x\s\f\6\e\j\z\i\6\i\r\4\5\1\7\y\s\m\0\0\f\p\9\z\l\6\u\w\h\f\u\m\f\1\p\o\c\a\i\w\w\s\n\e\w\l\t\0\r\r\h\z\0\l\h\1\c\n\e\2\1\m\l\1\r\d\3\1\o\f\i\3\7\z\1\i\x\g\7\t\9\a\w\8\2\p\r\i\5\r\o\r\a\v\z\k\s\f\w\6\5\4\k\b\2\a\4\q\f\9\e\8\e\q\f\n\n\0\h\w\s\w\a\9\h\y\b\5\9\q\t\h\i\h\7\2\m\m\r\p\b\k\w\p\5\j\k\z\b\w\9\v\v\a\0\a\y\5\b\f\d\n\c\g\7\s\5\5\k\n\8\c\5\j\5\l\3\2\2\0\4\9\m\9\s\r\i\9\9\9\n\3\e\b\m\3\r\9\w\1\a\5\1\d\6\6\9\g\e\w\4\n\g\3\p\7\r\i\a\b\n\v\y\b\q\a\h\5\1\6\v\c\m\m\z\7\5\5\l\5\a\o\t\d\h\o\8\c\v\i\r\4\k\e\t\3\r\b\t\x\s\4\t\x\j\c\f\m\x\2\h\1\4\m\p\5\i\d\a\u\3\s\y\b\g\h\r\2\0\j\7\7\o\0\0\d\c\k\p\t\2\q\u\c\9\g\5\x\d\0\y\3\t\e\9\n\e\a\c\q\a\x\v\6\6\a\r\d\4\e\l\l\j\7\p\m\y\1\7\b\i\o\b ]] 00:19:44.119 16:57:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:44.119 16:57:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:44.119 [2024-07-22 16:57:45.372479] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:44.119 [2024-07-22 16:57:45.372634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66270 ] 00:19:44.119 [2024-07-22 16:57:45.543027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.378 [2024-07-22 16:57:45.874009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.635 [2024-07-22 16:57:46.149290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:46.267  Copying: 512/512 [B] (average 250 kBps) 00:19:46.267 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yn8jtszxis59fe2mrxxokikinyouemi6z5fqfxay39egfk3sfhco7gi7gg28z2rcm5qsv69szkneplvg24odnsvxmj6vzo6hm92rbmwddpt5uavju1lao4jeol6aumwl5yxl87qlu1x727wn2m1zhzktf30nzowzs27s0vn26xsf6ejzi6ir4517ysm00fp9zl6uwhfumf1pocaiwwsnewlt0rrhz0lh1cne21ml1rd31ofi37z1ixg7t9aw82pri5roravzksfw654kb2a4qf9e8eqfnn0hwswa9hyb59qthih72mmrpbkwp5jkzbw9vva0ay5bfdncg7s55kn8c5j5l322049m9sri999n3ebm3r9w1a51d669gew4ng3p7riabnvybqah516vcmmz755l5aotdho8cvir4ket3rbtxs4txjcfmx2h14mp5idau3sybghr20j77o00dckpt2quc9g5xd0y3te9neacqaxv66ard4ellj7pmy17biob == \y\n\8\j\t\s\z\x\i\s\5\9\f\e\2\m\r\x\x\o\k\i\k\i\n\y\o\u\e\m\i\6\z\5\f\q\f\x\a\y\3\9\e\g\f\k\3\s\f\h\c\o\7\g\i\7\g\g\2\8\z\2\r\c\m\5\q\s\v\6\9\s\z\k\n\e\p\l\v\g\2\4\o\d\n\s\v\x\m\j\6\v\z\o\6\h\m\9\2\r\b\m\w\d\d\p\t\5\u\a\v\j\u\1\l\a\o\4\j\e\o\l\6\a\u\m\w\l\5\y\x\l\8\7\q\l\u\1\x\7\2\7\w\n\2\m\1\z\h\z\k\t\f\3\0\n\z\o\w\z\s\2\7\s\0\v\n\2\6\x\s\f\6\e\j\z\i\6\i\r\4\5\1\7\y\s\m\0\0\f\p\9\z\l\6\u\w\h\f\u\m\f\1\p\o\c\a\i\w\w\s\n\e\w\l\t\0\r\r\h\z\0\l\h\1\c\n\e\2\1\m\l\1\r\d\3\1\o\f\i\3\7\z\1\i\x\g\7\t\9\a\w\8\2\p\r\i\5\r\o\r\a\v\z\k\s\f\w\6\5\4\k\b\2\a\4\q\f\9\e\8\e\q\f\n\n\0\h\w\s\w\a\9\h\y\b\5\9\q\t\h\i\h\7\2\m\m\r\p\b\k\w\p\5\j\k\z\b\w\9\v\v\a\0\a\y\5\b\f\d\n\c\g\7\s\5\5\k\n\8\c\5\j\5\l\3\2\2\0\4\9\m\9\s\r\i\9\9\9\n\3\e\b\m\3\r\9\w\1\a\5\1\d\6\6\9\g\e\w\4\n\g\3\p\7\r\i\a\b\n\v\y\b\q\a\h\5\1\6\v\c\m\m\z\7\5\5\l\5\a\o\t\d\h\o\8\c\v\i\r\4\k\e\t\3\r\b\t\x\s\4\t\x\j\c\f\m\x\2\h\1\4\m\p\5\i\d\a\u\3\s\y\b\g\h\r\2\0\j\7\7\o\0\0\d\c\k\p\t\2\q\u\c\9\g\5\x\d\0\y\3\t\e\9\n\e\a\c\q\a\x\v\6\6\a\r\d\4\e\l\l\j\7\p\m\y\1\7\b\i\o\b ]] 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:46.267 16:57:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:46.267 [2024-07-22 16:57:47.871473] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:46.267 [2024-07-22 16:57:47.871648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66297 ] 00:19:46.525 [2024-07-22 16:57:48.058198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.795 [2024-07-22 16:57:48.303475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.052 [2024-07-22 16:57:48.556686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:48.681  Copying: 512/512 [B] (average 500 kBps) 00:19:48.681 00:19:48.681 16:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ k4fycnfufayiv4cb4lmdbgt3ehg3whj84boxs4nmgo9i14igwtnrvaemf8yurgj2mm6a2et7gul52gg5anxge08ydxime3c126xzmluhguous4fwrrp36pqev3fqhkhuvpr48yipf5ohund5kw0b0qckp24kzlxdcxh2dr4v9c79axoxeuvfovqb3ko0th89angaombvybuxgew6wjqpbp3aafrj5ighp2ilbpgoopzn9exstol1ec4fmvvoft23n5dgnol7luag4uzsjkwa0np9vlqclp3lciokvif9e2tdiv8hb3v31scto367uu88ea6dd24klit0t294fsmb90s920s87zxx3xyc6d9z1kci1gpv5wdkisi43fwal8xo6cocmrb5k6j1y5zx6bk9hjzrsoq5j5x76aag7frf47t8w6ovp50yxpww2ti4m8r43u3h6awpanmmxdi2mem0b27h08892how71i2nvx6nfi0vmottmwiv3imhjuhxy77 == \k\4\f\y\c\n\f\u\f\a\y\i\v\4\c\b\4\l\m\d\b\g\t\3\e\h\g\3\w\h\j\8\4\b\o\x\s\4\n\m\g\o\9\i\1\4\i\g\w\t\n\r\v\a\e\m\f\8\y\u\r\g\j\2\m\m\6\a\2\e\t\7\g\u\l\5\2\g\g\5\a\n\x\g\e\0\8\y\d\x\i\m\e\3\c\1\2\6\x\z\m\l\u\h\g\u\o\u\s\4\f\w\r\r\p\3\6\p\q\e\v\3\f\q\h\k\h\u\v\p\r\4\8\y\i\p\f\5\o\h\u\n\d\5\k\w\0\b\0\q\c\k\p\2\4\k\z\l\x\d\c\x\h\2\d\r\4\v\9\c\7\9\a\x\o\x\e\u\v\f\o\v\q\b\3\k\o\0\t\h\8\9\a\n\g\a\o\m\b\v\y\b\u\x\g\e\w\6\w\j\q\p\b\p\3\a\a\f\r\j\5\i\g\h\p\2\i\l\b\p\g\o\o\p\z\n\9\e\x\s\t\o\l\1\e\c\4\f\m\v\v\o\f\t\2\3\n\5\d\g\n\o\l\7\l\u\a\g\4\u\z\s\j\k\w\a\0\n\p\9\v\l\q\c\l\p\3\l\c\i\o\k\v\i\f\9\e\2\t\d\i\v\8\h\b\3\v\3\1\s\c\t\o\3\6\7\u\u\8\8\e\a\6\d\d\2\4\k\l\i\t\0\t\2\9\4\f\s\m\b\9\0\s\9\2\0\s\8\7\z\x\x\3\x\y\c\6\d\9\z\1\k\c\i\1\g\p\v\5\w\d\k\i\s\i\4\3\f\w\a\l\8\x\o\6\c\o\c\m\r\b\5\k\6\j\1\y\5\z\x\6\b\k\9\h\j\z\r\s\o\q\5\j\5\x\7\6\a\a\g\7\f\r\f\4\7\t\8\w\6\o\v\p\5\0\y\x\p\w\w\2\t\i\4\m\8\r\4\3\u\3\h\6\a\w\p\a\n\m\m\x\d\i\2\m\e\m\0\b\2\7\h\0\8\8\9\2\h\o\w\7\1\i\2\n\v\x\6\n\f\i\0\v\m\o\t\t\m\w\i\v\3\i\m\h\j\u\h\x\y\7\7 ]] 00:19:48.681 16:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:48.681 16:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:48.681 [2024-07-22 16:57:50.243998] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:48.681 [2024-07-22 16:57:50.244167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66324 ] 00:19:48.939 [2024-07-22 16:57:50.428151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.196 [2024-07-22 16:57:50.714759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.454 [2024-07-22 16:57:51.024431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:51.103  Copying: 512/512 [B] (average 500 kBps) 00:19:51.103 00:19:51.103 16:57:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ k4fycnfufayiv4cb4lmdbgt3ehg3whj84boxs4nmgo9i14igwtnrvaemf8yurgj2mm6a2et7gul52gg5anxge08ydxime3c126xzmluhguous4fwrrp36pqev3fqhkhuvpr48yipf5ohund5kw0b0qckp24kzlxdcxh2dr4v9c79axoxeuvfovqb3ko0th89angaombvybuxgew6wjqpbp3aafrj5ighp2ilbpgoopzn9exstol1ec4fmvvoft23n5dgnol7luag4uzsjkwa0np9vlqclp3lciokvif9e2tdiv8hb3v31scto367uu88ea6dd24klit0t294fsmb90s920s87zxx3xyc6d9z1kci1gpv5wdkisi43fwal8xo6cocmrb5k6j1y5zx6bk9hjzrsoq5j5x76aag7frf47t8w6ovp50yxpww2ti4m8r43u3h6awpanmmxdi2mem0b27h08892how71i2nvx6nfi0vmottmwiv3imhjuhxy77 == \k\4\f\y\c\n\f\u\f\a\y\i\v\4\c\b\4\l\m\d\b\g\t\3\e\h\g\3\w\h\j\8\4\b\o\x\s\4\n\m\g\o\9\i\1\4\i\g\w\t\n\r\v\a\e\m\f\8\y\u\r\g\j\2\m\m\6\a\2\e\t\7\g\u\l\5\2\g\g\5\a\n\x\g\e\0\8\y\d\x\i\m\e\3\c\1\2\6\x\z\m\l\u\h\g\u\o\u\s\4\f\w\r\r\p\3\6\p\q\e\v\3\f\q\h\k\h\u\v\p\r\4\8\y\i\p\f\5\o\h\u\n\d\5\k\w\0\b\0\q\c\k\p\2\4\k\z\l\x\d\c\x\h\2\d\r\4\v\9\c\7\9\a\x\o\x\e\u\v\f\o\v\q\b\3\k\o\0\t\h\8\9\a\n\g\a\o\m\b\v\y\b\u\x\g\e\w\6\w\j\q\p\b\p\3\a\a\f\r\j\5\i\g\h\p\2\i\l\b\p\g\o\o\p\z\n\9\e\x\s\t\o\l\1\e\c\4\f\m\v\v\o\f\t\2\3\n\5\d\g\n\o\l\7\l\u\a\g\4\u\z\s\j\k\w\a\0\n\p\9\v\l\q\c\l\p\3\l\c\i\o\k\v\i\f\9\e\2\t\d\i\v\8\h\b\3\v\3\1\s\c\t\o\3\6\7\u\u\8\8\e\a\6\d\d\2\4\k\l\i\t\0\t\2\9\4\f\s\m\b\9\0\s\9\2\0\s\8\7\z\x\x\3\x\y\c\6\d\9\z\1\k\c\i\1\g\p\v\5\w\d\k\i\s\i\4\3\f\w\a\l\8\x\o\6\c\o\c\m\r\b\5\k\6\j\1\y\5\z\x\6\b\k\9\h\j\z\r\s\o\q\5\j\5\x\7\6\a\a\g\7\f\r\f\4\7\t\8\w\6\o\v\p\5\0\y\x\p\w\w\2\t\i\4\m\8\r\4\3\u\3\h\6\a\w\p\a\n\m\m\x\d\i\2\m\e\m\0\b\2\7\h\0\8\8\9\2\h\o\w\7\1\i\2\n\v\x\6\n\f\i\0\v\m\o\t\t\m\w\i\v\3\i\m\h\j\u\h\x\y\7\7 ]] 00:19:51.103 16:57:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:51.103 16:57:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:51.360 [2024-07-22 16:57:52.836712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:51.360 [2024-07-22 16:57:52.836889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66362 ] 00:19:51.617 [2024-07-22 16:57:53.021204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.874 [2024-07-22 16:57:53.287513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.133 [2024-07-22 16:57:53.575433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:54.097  Copying: 512/512 [B] (average 166 kBps) 00:19:54.097 00:19:54.097 16:57:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ k4fycnfufayiv4cb4lmdbgt3ehg3whj84boxs4nmgo9i14igwtnrvaemf8yurgj2mm6a2et7gul52gg5anxge08ydxime3c126xzmluhguous4fwrrp36pqev3fqhkhuvpr48yipf5ohund5kw0b0qckp24kzlxdcxh2dr4v9c79axoxeuvfovqb3ko0th89angaombvybuxgew6wjqpbp3aafrj5ighp2ilbpgoopzn9exstol1ec4fmvvoft23n5dgnol7luag4uzsjkwa0np9vlqclp3lciokvif9e2tdiv8hb3v31scto367uu88ea6dd24klit0t294fsmb90s920s87zxx3xyc6d9z1kci1gpv5wdkisi43fwal8xo6cocmrb5k6j1y5zx6bk9hjzrsoq5j5x76aag7frf47t8w6ovp50yxpww2ti4m8r43u3h6awpanmmxdi2mem0b27h08892how71i2nvx6nfi0vmottmwiv3imhjuhxy77 == \k\4\f\y\c\n\f\u\f\a\y\i\v\4\c\b\4\l\m\d\b\g\t\3\e\h\g\3\w\h\j\8\4\b\o\x\s\4\n\m\g\o\9\i\1\4\i\g\w\t\n\r\v\a\e\m\f\8\y\u\r\g\j\2\m\m\6\a\2\e\t\7\g\u\l\5\2\g\g\5\a\n\x\g\e\0\8\y\d\x\i\m\e\3\c\1\2\6\x\z\m\l\u\h\g\u\o\u\s\4\f\w\r\r\p\3\6\p\q\e\v\3\f\q\h\k\h\u\v\p\r\4\8\y\i\p\f\5\o\h\u\n\d\5\k\w\0\b\0\q\c\k\p\2\4\k\z\l\x\d\c\x\h\2\d\r\4\v\9\c\7\9\a\x\o\x\e\u\v\f\o\v\q\b\3\k\o\0\t\h\8\9\a\n\g\a\o\m\b\v\y\b\u\x\g\e\w\6\w\j\q\p\b\p\3\a\a\f\r\j\5\i\g\h\p\2\i\l\b\p\g\o\o\p\z\n\9\e\x\s\t\o\l\1\e\c\4\f\m\v\v\o\f\t\2\3\n\5\d\g\n\o\l\7\l\u\a\g\4\u\z\s\j\k\w\a\0\n\p\9\v\l\q\c\l\p\3\l\c\i\o\k\v\i\f\9\e\2\t\d\i\v\8\h\b\3\v\3\1\s\c\t\o\3\6\7\u\u\8\8\e\a\6\d\d\2\4\k\l\i\t\0\t\2\9\4\f\s\m\b\9\0\s\9\2\0\s\8\7\z\x\x\3\x\y\c\6\d\9\z\1\k\c\i\1\g\p\v\5\w\d\k\i\s\i\4\3\f\w\a\l\8\x\o\6\c\o\c\m\r\b\5\k\6\j\1\y\5\z\x\6\b\k\9\h\j\z\r\s\o\q\5\j\5\x\7\6\a\a\g\7\f\r\f\4\7\t\8\w\6\o\v\p\5\0\y\x\p\w\w\2\t\i\4\m\8\r\4\3\u\3\h\6\a\w\p\a\n\m\m\x\d\i\2\m\e\m\0\b\2\7\h\0\8\8\9\2\h\o\w\7\1\i\2\n\v\x\6\n\f\i\0\v\m\o\t\t\m\w\i\v\3\i\m\h\j\u\h\x\y\7\7 ]] 00:19:54.097 16:57:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:54.097 16:57:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:54.097 [2024-07-22 16:57:55.327730] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:54.097 [2024-07-22 16:57:55.327916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66389 ] 00:19:54.097 [2024-07-22 16:57:55.513289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.355 [2024-07-22 16:57:55.769013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.613 [2024-07-22 16:57:56.033411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:56.518  Copying: 512/512 [B] (average 500 kBps) 00:19:56.518 00:19:56.518 ************************************ 00:19:56.518 END TEST dd_flags_misc 00:19:56.518 ************************************ 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ k4fycnfufayiv4cb4lmdbgt3ehg3whj84boxs4nmgo9i14igwtnrvaemf8yurgj2mm6a2et7gul52gg5anxge08ydxime3c126xzmluhguous4fwrrp36pqev3fqhkhuvpr48yipf5ohund5kw0b0qckp24kzlxdcxh2dr4v9c79axoxeuvfovqb3ko0th89angaombvybuxgew6wjqpbp3aafrj5ighp2ilbpgoopzn9exstol1ec4fmvvoft23n5dgnol7luag4uzsjkwa0np9vlqclp3lciokvif9e2tdiv8hb3v31scto367uu88ea6dd24klit0t294fsmb90s920s87zxx3xyc6d9z1kci1gpv5wdkisi43fwal8xo6cocmrb5k6j1y5zx6bk9hjzrsoq5j5x76aag7frf47t8w6ovp50yxpww2ti4m8r43u3h6awpanmmxdi2mem0b27h08892how71i2nvx6nfi0vmottmwiv3imhjuhxy77 == \k\4\f\y\c\n\f\u\f\a\y\i\v\4\c\b\4\l\m\d\b\g\t\3\e\h\g\3\w\h\j\8\4\b\o\x\s\4\n\m\g\o\9\i\1\4\i\g\w\t\n\r\v\a\e\m\f\8\y\u\r\g\j\2\m\m\6\a\2\e\t\7\g\u\l\5\2\g\g\5\a\n\x\g\e\0\8\y\d\x\i\m\e\3\c\1\2\6\x\z\m\l\u\h\g\u\o\u\s\4\f\w\r\r\p\3\6\p\q\e\v\3\f\q\h\k\h\u\v\p\r\4\8\y\i\p\f\5\o\h\u\n\d\5\k\w\0\b\0\q\c\k\p\2\4\k\z\l\x\d\c\x\h\2\d\r\4\v\9\c\7\9\a\x\o\x\e\u\v\f\o\v\q\b\3\k\o\0\t\h\8\9\a\n\g\a\o\m\b\v\y\b\u\x\g\e\w\6\w\j\q\p\b\p\3\a\a\f\r\j\5\i\g\h\p\2\i\l\b\p\g\o\o\p\z\n\9\e\x\s\t\o\l\1\e\c\4\f\m\v\v\o\f\t\2\3\n\5\d\g\n\o\l\7\l\u\a\g\4\u\z\s\j\k\w\a\0\n\p\9\v\l\q\c\l\p\3\l\c\i\o\k\v\i\f\9\e\2\t\d\i\v\8\h\b\3\v\3\1\s\c\t\o\3\6\7\u\u\8\8\e\a\6\d\d\2\4\k\l\i\t\0\t\2\9\4\f\s\m\b\9\0\s\9\2\0\s\8\7\z\x\x\3\x\y\c\6\d\9\z\1\k\c\i\1\g\p\v\5\w\d\k\i\s\i\4\3\f\w\a\l\8\x\o\6\c\o\c\m\r\b\5\k\6\j\1\y\5\z\x\6\b\k\9\h\j\z\r\s\o\q\5\j\5\x\7\6\a\a\g\7\f\r\f\4\7\t\8\w\6\o\v\p\5\0\y\x\p\w\w\2\t\i\4\m\8\r\4\3\u\3\h\6\a\w\p\a\n\m\m\x\d\i\2\m\e\m\0\b\2\7\h\0\8\8\9\2\h\o\w\7\1\i\2\n\v\x\6\n\f\i\0\v\m\o\t\t\m\w\i\v\3\i\m\h\j\u\h\x\y\7\7 ]] 00:19:56.518 00:19:56.518 real 0m19.752s 00:19:56.518 user 0m16.709s 00:19:56.518 sys 0m9.748s 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:19:56.518 * Second test run, disabling liburing, forcing AIO 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:56.518 ************************************ 00:19:56.518 START TEST dd_flag_append_forced_aio 00:19:56.518 ************************************ 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=3626i1ogo3ez97a9e8cb0pfs5oneozwf 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=u1mwbo60il1gnnifqsipl73afjdv8tqo 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 3626i1ogo3ez97a9e8cb0pfs5oneozwf 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s u1mwbo60il1gnnifqsipl73afjdv8tqo 00:19:56.518 16:57:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:19:56.518 [2024-07-22 16:57:57.812207] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
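Before the comparison in the next stretch of output, the shape of the append check may be easier to see in isolation: two 32-character random strings are written to dd.dump0 and dd.dump1, dd.dump0 is copied onto dd.dump1 with the append write flag, and the test then expects dd.dump1 to contain its own original content followed by dd.dump0's. A reduced sketch of that sequence, with GNU dd standing in for the spdk_dd --aio invocation (a stand-in assumed for illustration only):

    dump0=$(head -c 24 /dev/urandom | base64)   # stand-ins for gen_bytes 32 (32 random characters)
    dump1=$(head -c 24 /dev/urandom | base64)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    # conv=notrunc keeps the existing bytes; oflag=append adds the new ones at the end.
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
    [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo "append kept the original prefix"
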
00:19:56.518 [2024-07-22 16:57:57.812393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66441 ] 00:19:56.518 [2024-07-22 16:57:57.995654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.777 [2024-07-22 16:57:58.339094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.036 [2024-07-22 16:57:58.619264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:58.702  Copying: 32/32 [B] (average 31 kBps) 00:19:58.702 00:19:58.702 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ u1mwbo60il1gnnifqsipl73afjdv8tqo3626i1ogo3ez97a9e8cb0pfs5oneozwf == \u\1\m\w\b\o\6\0\i\l\1\g\n\n\i\f\q\s\i\p\l\7\3\a\f\j\d\v\8\t\q\o\3\6\2\6\i\1\o\g\o\3\e\z\9\7\a\9\e\8\c\b\0\p\f\s\5\o\n\e\o\z\w\f ]] 00:19:58.702 00:19:58.702 real 0m2.608s 00:19:58.702 user 0m2.223s 00:19:58.702 sys 0m0.257s 00:19:58.702 ************************************ 00:19:58.702 END TEST dd_flag_append_forced_aio 00:19:58.702 ************************************ 00:19:58.702 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.702 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:19:58.960 ************************************ 00:19:58.960 START TEST dd_flag_directory_forced_aio 00:19:58.960 ************************************ 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:58.960 16:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:58.960 [2024-07-22 16:58:00.492015] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:58.960 [2024-07-22 16:58:00.492157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66490 ] 00:19:59.227 [2024-07-22 16:58:00.668239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.484 [2024-07-22 16:58:00.935933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.744 [2024-07-22 16:58:01.183898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:59.744 [2024-07-22 16:58:01.318899] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:59.744 [2024-07-22 16:58:01.318954] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:59.744 [2024-07-22 16:58:01.318977] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:00.686 [2024-07-22 16:58:02.279499] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:01.273 16:58:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:20:01.548 [2024-07-22 16:58:02.901972] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:01.548 [2024-07-22 16:58:02.902114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66523 ] 00:20:01.548 [2024-07-22 16:58:03.071726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.820 [2024-07-22 16:58:03.347626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.089 [2024-07-22 16:58:03.627006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:02.351 [2024-07-22 16:58:03.767939] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:20:02.351 [2024-07-22 16:58:03.768001] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:20:02.351 [2024-07-22 16:58:03.768027] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:03.285 [2024-07-22 16:58:04.783389] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:20:03.852 
16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:03.852 00:20:03.852 real 0m4.965s 00:20:03.852 user 0m4.221s 00:20:03.852 sys 0m0.513s 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:03.852 ************************************ 00:20:03.852 END TEST dd_flag_directory_forced_aio 00:20:03.852 ************************************ 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:20:03.852 ************************************ 00:20:03.852 START TEST dd_flag_nofollow_forced_aio 00:20:03.852 ************************************ 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:03.852 16:58:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:04.111 [2024-07-22 16:58:05.522406] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:04.111 [2024-07-22 16:58:05.522575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66569 ] 00:20:04.111 [2024-07-22 16:58:05.692882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.368 [2024-07-22 16:58:05.962705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.627 [2024-07-22 16:58:06.241599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:04.886 [2024-07-22 16:58:06.382953] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:20:04.886 [2024-07-22 16:58:06.383027] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:20:04.886 [2024-07-22 16:58:06.383054] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:05.860 [2024-07-22 16:58:07.358874] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.426 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.427 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.427 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.427 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.427 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:06.427 16:58:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:20:06.427 [2024-07-22 16:58:07.984979] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:06.427 [2024-07-22 16:58:07.985129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66606 ] 00:20:06.684 [2024-07-22 16:58:08.156187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.942 [2024-07-22 16:58:08.422259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.201 [2024-07-22 16:58:08.695039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:07.458 [2024-07-22 16:58:08.833141] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:20:07.458 [2024-07-22 16:58:08.833204] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:20:07.458 [2024-07-22 16:58:08.833230] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:08.391 [2024-07-22 16:58:09.773529] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:08.956 16:58:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:08.956 [2024-07-22 16:58:10.375783] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:08.956 [2024-07-22 16:58:10.375920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66632 ] 00:20:08.956 [2024-07-22 16:58:10.541652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.213 [2024-07-22 16:58:10.805073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.471 [2024-07-22 16:58:11.078335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:11.101  Copying: 512/512 [B] (average 500 kBps) 00:20:11.101 00:20:11.101 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ unoizz60kv541ssq9lpojwhvi75qx8mow5sxl5z34njjvf5ln8ybrf3fu4q5yqy53kxe8op6do6am75rngljav8yho2azs5n2qj6kc56zi1rhm2m68umb8rvxsl4bchzevda4iqmgaairzy122y49glk4v11khmp5ijx1pop0wi9201yytfkt876nztl7e5n26r1o0yuexgm6hqx01tys9tiqvkily4dxwu5dnv5qfiwoctwjuglpyeopoaf7hum00vjmf7ka8fjoal6ed98ifp1tgz5q18tfsmafkq7pxfclkoow46tzqiy775bnxc59i39cgpjcadbgghrfp6jag435kegmv9o2avtuq97604pilb6r1xoc11d1131hl47jfsme9b39pxa4v8bz1aq1k59hgdssez3cnh7qu09h0tj4bwd47lum6v2utb5kj2i98xlqebes2olx4mn2l9u2wx5lqildolt3wixnn7us0z3rgboox96bci24lb2vn7q == \u\n\o\i\z\z\6\0\k\v\5\4\1\s\s\q\9\l\p\o\j\w\h\v\i\7\5\q\x\8\m\o\w\5\s\x\l\5\z\3\4\n\j\j\v\f\5\l\n\8\y\b\r\f\3\f\u\4\q\5\y\q\y\5\3\k\x\e\8\o\p\6\d\o\6\a\m\7\5\r\n\g\l\j\a\v\8\y\h\o\2\a\z\s\5\n\2\q\j\6\k\c\5\6\z\i\1\r\h\m\2\m\6\8\u\m\b\8\r\v\x\s\l\4\b\c\h\z\e\v\d\a\4\i\q\m\g\a\a\i\r\z\y\1\2\2\y\4\9\g\l\k\4\v\1\1\k\h\m\p\5\i\j\x\1\p\o\p\0\w\i\9\2\0\1\y\y\t\f\k\t\8\7\6\n\z\t\l\7\e\5\n\2\6\r\1\o\0\y\u\e\x\g\m\6\h\q\x\0\1\t\y\s\9\t\i\q\v\k\i\l\y\4\d\x\w\u\5\d\n\v\5\q\f\i\w\o\c\t\w\j\u\g\l\p\y\e\o\p\o\a\f\7\h\u\m\0\0\v\j\m\f\7\k\a\8\f\j\o\a\l\6\e\d\9\8\i\f\p\1\t\g\z\5\q\1\8\t\f\s\m\a\f\k\q\7\p\x\f\c\l\k\o\o\w\4\6\t\z\q\i\y\7\7\5\b\n\x\c\5\9\i\3\9\c\g\p\j\c\a\d\b\g\g\h\r\f\p\6\j\a\g\4\3\5\k\e\g\m\v\9\o\2\a\v\t\u\q\9\7\6\0\4\p\i\l\b\6\r\1\x\o\c\1\1\d\1\1\3\1\h\l\4\7\j\f\s\m\e\9\b\3\9\p\x\a\4\v\8\b\z\1\a\q\1\k\5\9\h\g\d\s\s\e\z\3\c\n\h\7\q\u\0\9\h\0\t\j\4\b\w\d\4\7\l\u\m\6\v\2\u\t\b\5\k\j\2\i\9\8\x\l\q\e\b\e\s\2\o\l\x\4\m\n\2\l\9\u\2\w\x\5\l\q\i\l\d\o\l\t\3\w\i\x\n\n\7\u\s\0\z\3\r\g\b\o\o\x\9\6\b\c\i\2\4\l\b\2\v\n\7\q ]] 00:20:11.101 00:20:11.101 real 0m7.316s 00:20:11.101 user 0m6.203s 00:20:11.101 sys 0m0.755s 00:20:11.101 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.101 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 ************************************ 00:20:11.101 END TEST dd_flag_nofollow_forced_aio 
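The directory and nofollow checks that finish here are both negative tests wrapped in NOT: opening a regular dump file with the directory flag must fail with "Not a directory", opening dd.dump0.link or dd.dump1.link with the nofollow flag must fail with "Too many levels of symbolic links", and the final copy through the link without nofollow succeeds. A compact sketch of those expectations, using GNU dd purely as an illustrative stand-in for spdk_dd (the es= exit-code bookkeeping from the harness is omitted):

    head -c 512 /dev/urandom > dd.dump0
    ln -fs dd.dump0 dd.dump0.link
    # A regular file opened with the directory flag fails with ENOTDIR.
    ! dd if=dd.dump0 iflag=directory of=/dev/null status=none 2>/dev/null \
        && echo "directory flag rejected a regular file"
    # A symlink opened with nofollow fails with ELOOP ...
    ! dd if=dd.dump0.link iflag=nofollow of=/dev/null status=none 2>/dev/null \
        && echo "nofollow rejected the symlink"
    # ... but the same link is fine once the flag is dropped.
    dd if=dd.dump0.link of=dd.dump1 status=none
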
00:20:11.101 ************************************ 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:20:11.360 ************************************ 00:20:11.360 START TEST dd_flag_noatime_forced_aio 00:20:11.360 ************************************ 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721667491 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721667492 00:20:11.360 16:58:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:20:12.292 16:58:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:12.292 [2024-07-22 16:58:13.908305] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
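The noatime test that starts above records each file's access time with stat --printf=%X, sleeps one second, copies dd.dump0 with the noatime read flag, and the checks in the following output assert that the recorded atimes are unchanged; a later copy without the flag is expected to move the source atime forward. A minimal sketch of that sequence (GNU dd as a stand-in, and it assumes the filesystem's mount options do not suppress atime updates on their own):

    head -c 512 /dev/urandom > dd.dump0
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    # O_NOATIME is only honoured for files the caller owns (or with CAP_FOWNER).
    dd if=dd.dump0 iflag=noatime of=dd.dump1 status=none
    (( $(stat --printf=%X dd.dump0) == atime_before )) && echo "noatime left the source atime alone"
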
00:20:12.292 [2024-07-22 16:58:13.908495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66696 ] 00:20:12.549 [2024-07-22 16:58:14.087991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.807 [2024-07-22 16:58:14.365823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.066 [2024-07-22 16:58:14.635201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:14.699  Copying: 512/512 [B] (average 500 kBps) 00:20:14.699 00:20:14.699 16:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:14.699 16:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721667491 )) 00:20:14.699 16:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:14.699 16:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721667492 )) 00:20:14.699 16:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:14.958 [2024-07-22 16:58:16.370041] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:14.958 [2024-07-22 16:58:16.370210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66725 ] 00:20:14.958 [2024-07-22 16:58:16.555382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.216 [2024-07-22 16:58:16.811938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.474 [2024-07-22 16:58:17.076628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:17.108  Copying: 512/512 [B] (average 500 kBps) 00:20:17.108 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721667497 )) 00:20:17.109 00:20:17.109 real 0m5.921s 00:20:17.109 user 0m4.119s 00:20:17.109 sys 0m0.550s 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:17.109 ************************************ 00:20:17.109 END TEST dd_flag_noatime_forced_aio 00:20:17.109 ************************************ 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:17.109 16:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.109 16:58:18 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:20:17.367 ************************************ 00:20:17.367 START TEST dd_flags_misc_forced_aio 00:20:17.367 ************************************ 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:17.367 16:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:20:17.367 [2024-07-22 16:58:18.841368] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:17.367 [2024-07-22 16:58:18.841536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66769 ] 00:20:17.625 [2024-07-22 16:58:19.021312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.883 [2024-07-22 16:58:19.278700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.141 [2024-07-22 16:58:19.529884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:19.539  Copying: 512/512 [B] (average 500 kBps) 00:20:19.539 00:20:19.539 16:58:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vie2hpjnlgr9h8dibu34nq1ycrb1mslhhwud0epp6daz5jkjrx402izx47q94g56agquwemf2c6azpvrwaquxrzb1i5m9lmgpvdg41mucap53tz9js0hyt5phmyvx6tn94rvn27t67nzon6c9a4svsp2bmeto9apbw8pmhx7g126jn3ix2ywywn1nvuniwu7poodet9o8dl1xcfxbiea7lq3lr3eudc48oqu4r21i3gl2y41ntvrvf7ff6s877y3mnxzjtnjlehv11sw663yveq9rej2jv615ooda7dkx3zh2sk7swwwzsm9eiqj1dbp3eoif64dkcz193b99i4dyayjs9yboh9r5oepja95ckn8yarvse8dka5dw5gporjij8grwxtpwtqurl8cffldxdaxeifwpuff74f2ej0wovnvqcqr9vyibwdomx4zhpfmjjpv9dwyw5cxd8ygw9r1d1qxrnd415p1518n0xq5gqbl3skra1bqxlr4o0ylvfpc == 
\v\i\e\2\h\p\j\n\l\g\r\9\h\8\d\i\b\u\3\4\n\q\1\y\c\r\b\1\m\s\l\h\h\w\u\d\0\e\p\p\6\d\a\z\5\j\k\j\r\x\4\0\2\i\z\x\4\7\q\9\4\g\5\6\a\g\q\u\w\e\m\f\2\c\6\a\z\p\v\r\w\a\q\u\x\r\z\b\1\i\5\m\9\l\m\g\p\v\d\g\4\1\m\u\c\a\p\5\3\t\z\9\j\s\0\h\y\t\5\p\h\m\y\v\x\6\t\n\9\4\r\v\n\2\7\t\6\7\n\z\o\n\6\c\9\a\4\s\v\s\p\2\b\m\e\t\o\9\a\p\b\w\8\p\m\h\x\7\g\1\2\6\j\n\3\i\x\2\y\w\y\w\n\1\n\v\u\n\i\w\u\7\p\o\o\d\e\t\9\o\8\d\l\1\x\c\f\x\b\i\e\a\7\l\q\3\l\r\3\e\u\d\c\4\8\o\q\u\4\r\2\1\i\3\g\l\2\y\4\1\n\t\v\r\v\f\7\f\f\6\s\8\7\7\y\3\m\n\x\z\j\t\n\j\l\e\h\v\1\1\s\w\6\6\3\y\v\e\q\9\r\e\j\2\j\v\6\1\5\o\o\d\a\7\d\k\x\3\z\h\2\s\k\7\s\w\w\w\z\s\m\9\e\i\q\j\1\d\b\p\3\e\o\i\f\6\4\d\k\c\z\1\9\3\b\9\9\i\4\d\y\a\y\j\s\9\y\b\o\h\9\r\5\o\e\p\j\a\9\5\c\k\n\8\y\a\r\v\s\e\8\d\k\a\5\d\w\5\g\p\o\r\j\i\j\8\g\r\w\x\t\p\w\t\q\u\r\l\8\c\f\f\l\d\x\d\a\x\e\i\f\w\p\u\f\f\7\4\f\2\e\j\0\w\o\v\n\v\q\c\q\r\9\v\y\i\b\w\d\o\m\x\4\z\h\p\f\m\j\j\p\v\9\d\w\y\w\5\c\x\d\8\y\g\w\9\r\1\d\1\q\x\r\n\d\4\1\5\p\1\5\1\8\n\0\x\q\5\g\q\b\l\3\s\k\r\a\1\b\q\x\l\r\4\o\0\y\l\v\f\p\c ]] 00:20:19.539 16:58:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:19.539 16:58:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:20:19.797 [2024-07-22 16:58:21.256758] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:19.797 [2024-07-22 16:58:21.256933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66801 ] 00:20:20.056 [2024-07-22 16:58:21.442981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.314 [2024-07-22 16:58:21.781693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.573 [2024-07-22 16:58:22.035727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:22.474  Copying: 512/512 [B] (average 500 kBps) 00:20:22.474 00:20:22.474 16:58:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vie2hpjnlgr9h8dibu34nq1ycrb1mslhhwud0epp6daz5jkjrx402izx47q94g56agquwemf2c6azpvrwaquxrzb1i5m9lmgpvdg41mucap53tz9js0hyt5phmyvx6tn94rvn27t67nzon6c9a4svsp2bmeto9apbw8pmhx7g126jn3ix2ywywn1nvuniwu7poodet9o8dl1xcfxbiea7lq3lr3eudc48oqu4r21i3gl2y41ntvrvf7ff6s877y3mnxzjtnjlehv11sw663yveq9rej2jv615ooda7dkx3zh2sk7swwwzsm9eiqj1dbp3eoif64dkcz193b99i4dyayjs9yboh9r5oepja95ckn8yarvse8dka5dw5gporjij8grwxtpwtqurl8cffldxdaxeifwpuff74f2ej0wovnvqcqr9vyibwdomx4zhpfmjjpv9dwyw5cxd8ygw9r1d1qxrnd415p1518n0xq5gqbl3skra1bqxlr4o0ylvfpc == 
\v\i\e\2\h\p\j\n\l\g\r\9\h\8\d\i\b\u\3\4\n\q\1\y\c\r\b\1\m\s\l\h\h\w\u\d\0\e\p\p\6\d\a\z\5\j\k\j\r\x\4\0\2\i\z\x\4\7\q\9\4\g\5\6\a\g\q\u\w\e\m\f\2\c\6\a\z\p\v\r\w\a\q\u\x\r\z\b\1\i\5\m\9\l\m\g\p\v\d\g\4\1\m\u\c\a\p\5\3\t\z\9\j\s\0\h\y\t\5\p\h\m\y\v\x\6\t\n\9\4\r\v\n\2\7\t\6\7\n\z\o\n\6\c\9\a\4\s\v\s\p\2\b\m\e\t\o\9\a\p\b\w\8\p\m\h\x\7\g\1\2\6\j\n\3\i\x\2\y\w\y\w\n\1\n\v\u\n\i\w\u\7\p\o\o\d\e\t\9\o\8\d\l\1\x\c\f\x\b\i\e\a\7\l\q\3\l\r\3\e\u\d\c\4\8\o\q\u\4\r\2\1\i\3\g\l\2\y\4\1\n\t\v\r\v\f\7\f\f\6\s\8\7\7\y\3\m\n\x\z\j\t\n\j\l\e\h\v\1\1\s\w\6\6\3\y\v\e\q\9\r\e\j\2\j\v\6\1\5\o\o\d\a\7\d\k\x\3\z\h\2\s\k\7\s\w\w\w\z\s\m\9\e\i\q\j\1\d\b\p\3\e\o\i\f\6\4\d\k\c\z\1\9\3\b\9\9\i\4\d\y\a\y\j\s\9\y\b\o\h\9\r\5\o\e\p\j\a\9\5\c\k\n\8\y\a\r\v\s\e\8\d\k\a\5\d\w\5\g\p\o\r\j\i\j\8\g\r\w\x\t\p\w\t\q\u\r\l\8\c\f\f\l\d\x\d\a\x\e\i\f\w\p\u\f\f\7\4\f\2\e\j\0\w\o\v\n\v\q\c\q\r\9\v\y\i\b\w\d\o\m\x\4\z\h\p\f\m\j\j\p\v\9\d\w\y\w\5\c\x\d\8\y\g\w\9\r\1\d\1\q\x\r\n\d\4\1\5\p\1\5\1\8\n\0\x\q\5\g\q\b\l\3\s\k\r\a\1\b\q\x\l\r\4\o\0\y\l\v\f\p\c ]] 00:20:22.474 16:58:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:22.474 16:58:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:20:22.474 [2024-07-22 16:58:23.699638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:22.474 [2024-07-22 16:58:23.699854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66830 ] 00:20:22.474 [2024-07-22 16:58:23.878338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.733 [2024-07-22 16:58:24.118348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.992 [2024-07-22 16:58:24.369794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:24.368  Copying: 512/512 [B] (average 250 kBps) 00:20:24.368 00:20:24.368 16:58:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vie2hpjnlgr9h8dibu34nq1ycrb1mslhhwud0epp6daz5jkjrx402izx47q94g56agquwemf2c6azpvrwaquxrzb1i5m9lmgpvdg41mucap53tz9js0hyt5phmyvx6tn94rvn27t67nzon6c9a4svsp2bmeto9apbw8pmhx7g126jn3ix2ywywn1nvuniwu7poodet9o8dl1xcfxbiea7lq3lr3eudc48oqu4r21i3gl2y41ntvrvf7ff6s877y3mnxzjtnjlehv11sw663yveq9rej2jv615ooda7dkx3zh2sk7swwwzsm9eiqj1dbp3eoif64dkcz193b99i4dyayjs9yboh9r5oepja95ckn8yarvse8dka5dw5gporjij8grwxtpwtqurl8cffldxdaxeifwpuff74f2ej0wovnvqcqr9vyibwdomx4zhpfmjjpv9dwyw5cxd8ygw9r1d1qxrnd415p1518n0xq5gqbl3skra1bqxlr4o0ylvfpc == 
\v\i\e\2\h\p\j\n\l\g\r\9\h\8\d\i\b\u\3\4\n\q\1\y\c\r\b\1\m\s\l\h\h\w\u\d\0\e\p\p\6\d\a\z\5\j\k\j\r\x\4\0\2\i\z\x\4\7\q\9\4\g\5\6\a\g\q\u\w\e\m\f\2\c\6\a\z\p\v\r\w\a\q\u\x\r\z\b\1\i\5\m\9\l\m\g\p\v\d\g\4\1\m\u\c\a\p\5\3\t\z\9\j\s\0\h\y\t\5\p\h\m\y\v\x\6\t\n\9\4\r\v\n\2\7\t\6\7\n\z\o\n\6\c\9\a\4\s\v\s\p\2\b\m\e\t\o\9\a\p\b\w\8\p\m\h\x\7\g\1\2\6\j\n\3\i\x\2\y\w\y\w\n\1\n\v\u\n\i\w\u\7\p\o\o\d\e\t\9\o\8\d\l\1\x\c\f\x\b\i\e\a\7\l\q\3\l\r\3\e\u\d\c\4\8\o\q\u\4\r\2\1\i\3\g\l\2\y\4\1\n\t\v\r\v\f\7\f\f\6\s\8\7\7\y\3\m\n\x\z\j\t\n\j\l\e\h\v\1\1\s\w\6\6\3\y\v\e\q\9\r\e\j\2\j\v\6\1\5\o\o\d\a\7\d\k\x\3\z\h\2\s\k\7\s\w\w\w\z\s\m\9\e\i\q\j\1\d\b\p\3\e\o\i\f\6\4\d\k\c\z\1\9\3\b\9\9\i\4\d\y\a\y\j\s\9\y\b\o\h\9\r\5\o\e\p\j\a\9\5\c\k\n\8\y\a\r\v\s\e\8\d\k\a\5\d\w\5\g\p\o\r\j\i\j\8\g\r\w\x\t\p\w\t\q\u\r\l\8\c\f\f\l\d\x\d\a\x\e\i\f\w\p\u\f\f\7\4\f\2\e\j\0\w\o\v\n\v\q\c\q\r\9\v\y\i\b\w\d\o\m\x\4\z\h\p\f\m\j\j\p\v\9\d\w\y\w\5\c\x\d\8\y\g\w\9\r\1\d\1\q\x\r\n\d\4\1\5\p\1\5\1\8\n\0\x\q\5\g\q\b\l\3\s\k\r\a\1\b\q\x\l\r\4\o\0\y\l\v\f\p\c ]] 00:20:24.368 16:58:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:24.368 16:58:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:20:24.627 [2024-07-22 16:58:26.048083] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:24.627 [2024-07-22 16:58:26.048292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66857 ] 00:20:24.627 [2024-07-22 16:58:26.239714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.193 [2024-07-22 16:58:26.571310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.452 [2024-07-22 16:58:26.845720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:27.354  Copying: 512/512 [B] (average 500 kBps) 00:20:27.354 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vie2hpjnlgr9h8dibu34nq1ycrb1mslhhwud0epp6daz5jkjrx402izx47q94g56agquwemf2c6azpvrwaquxrzb1i5m9lmgpvdg41mucap53tz9js0hyt5phmyvx6tn94rvn27t67nzon6c9a4svsp2bmeto9apbw8pmhx7g126jn3ix2ywywn1nvuniwu7poodet9o8dl1xcfxbiea7lq3lr3eudc48oqu4r21i3gl2y41ntvrvf7ff6s877y3mnxzjtnjlehv11sw663yveq9rej2jv615ooda7dkx3zh2sk7swwwzsm9eiqj1dbp3eoif64dkcz193b99i4dyayjs9yboh9r5oepja95ckn8yarvse8dka5dw5gporjij8grwxtpwtqurl8cffldxdaxeifwpuff74f2ej0wovnvqcqr9vyibwdomx4zhpfmjjpv9dwyw5cxd8ygw9r1d1qxrnd415p1518n0xq5gqbl3skra1bqxlr4o0ylvfpc == 
\v\i\e\2\h\p\j\n\l\g\r\9\h\8\d\i\b\u\3\4\n\q\1\y\c\r\b\1\m\s\l\h\h\w\u\d\0\e\p\p\6\d\a\z\5\j\k\j\r\x\4\0\2\i\z\x\4\7\q\9\4\g\5\6\a\g\q\u\w\e\m\f\2\c\6\a\z\p\v\r\w\a\q\u\x\r\z\b\1\i\5\m\9\l\m\g\p\v\d\g\4\1\m\u\c\a\p\5\3\t\z\9\j\s\0\h\y\t\5\p\h\m\y\v\x\6\t\n\9\4\r\v\n\2\7\t\6\7\n\z\o\n\6\c\9\a\4\s\v\s\p\2\b\m\e\t\o\9\a\p\b\w\8\p\m\h\x\7\g\1\2\6\j\n\3\i\x\2\y\w\y\w\n\1\n\v\u\n\i\w\u\7\p\o\o\d\e\t\9\o\8\d\l\1\x\c\f\x\b\i\e\a\7\l\q\3\l\r\3\e\u\d\c\4\8\o\q\u\4\r\2\1\i\3\g\l\2\y\4\1\n\t\v\r\v\f\7\f\f\6\s\8\7\7\y\3\m\n\x\z\j\t\n\j\l\e\h\v\1\1\s\w\6\6\3\y\v\e\q\9\r\e\j\2\j\v\6\1\5\o\o\d\a\7\d\k\x\3\z\h\2\s\k\7\s\w\w\w\z\s\m\9\e\i\q\j\1\d\b\p\3\e\o\i\f\6\4\d\k\c\z\1\9\3\b\9\9\i\4\d\y\a\y\j\s\9\y\b\o\h\9\r\5\o\e\p\j\a\9\5\c\k\n\8\y\a\r\v\s\e\8\d\k\a\5\d\w\5\g\p\o\r\j\i\j\8\g\r\w\x\t\p\w\t\q\u\r\l\8\c\f\f\l\d\x\d\a\x\e\i\f\w\p\u\f\f\7\4\f\2\e\j\0\w\o\v\n\v\q\c\q\r\9\v\y\i\b\w\d\o\m\x\4\z\h\p\f\m\j\j\p\v\9\d\w\y\w\5\c\x\d\8\y\g\w\9\r\1\d\1\q\x\r\n\d\4\1\5\p\1\5\1\8\n\0\x\q\5\g\q\b\l\3\s\k\r\a\1\b\q\x\l\r\4\o\0\y\l\v\f\p\c ]] 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:27.354 16:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:20:27.354 [2024-07-22 16:58:28.636710] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:27.354 [2024-07-22 16:58:28.636888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66888 ] 00:20:27.354 [2024-07-22 16:58:28.816795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.612 [2024-07-22 16:58:29.072515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.871 [2024-07-22 16:58:29.349242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:29.826  Copying: 512/512 [B] (average 500 kBps) 00:20:29.826 00:20:29.826 16:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5ibc7ce8xeuu1929b7w6abx5blb8myzhsdvmhasxqyvl9acpwqctfq9f5csbre5sgara6o2izcalubhogmp1xsnd4xcpz1zupludmfv8jayq8duu9c53akbaq7j6tnlpmbkrcqoagiy0pl0wy58ir74zoquiuponmnaso9t0gwlmmdu804drznt6uem42siw4cbi5z9js8m5bbtxinpiwg4brvgclw8xng424uzxublnr4x9ruoaz6gk1d6k56afrwa4ve5lh1jtf0g7jayfn84bz8g9rzkzixhe4ixe628zd9ui75gddzajkfri0osbida4mf0cy1t3xr452hs2swqodepqte8p8b10g432vgr3flet7bz1c2u924yudgx4j1hunfpw9nebekpy7fytsqjpwodyc1r8uxccwbf0qpcs56h1erjxdih08iydoeha3x09dm6io44bdipnaaldkpha6me9slbusqakn7qs9apf2hdgsrz5chk2xo3b9sj2 == \5\i\b\c\7\c\e\8\x\e\u\u\1\9\2\9\b\7\w\6\a\b\x\5\b\l\b\8\m\y\z\h\s\d\v\m\h\a\s\x\q\y\v\l\9\a\c\p\w\q\c\t\f\q\9\f\5\c\s\b\r\e\5\s\g\a\r\a\6\o\2\i\z\c\a\l\u\b\h\o\g\m\p\1\x\s\n\d\4\x\c\p\z\1\z\u\p\l\u\d\m\f\v\8\j\a\y\q\8\d\u\u\9\c\5\3\a\k\b\a\q\7\j\6\t\n\l\p\m\b\k\r\c\q\o\a\g\i\y\0\p\l\0\w\y\5\8\i\r\7\4\z\o\q\u\i\u\p\o\n\m\n\a\s\o\9\t\0\g\w\l\m\m\d\u\8\0\4\d\r\z\n\t\6\u\e\m\4\2\s\i\w\4\c\b\i\5\z\9\j\s\8\m\5\b\b\t\x\i\n\p\i\w\g\4\b\r\v\g\c\l\w\8\x\n\g\4\2\4\u\z\x\u\b\l\n\r\4\x\9\r\u\o\a\z\6\g\k\1\d\6\k\5\6\a\f\r\w\a\4\v\e\5\l\h\1\j\t\f\0\g\7\j\a\y\f\n\8\4\b\z\8\g\9\r\z\k\z\i\x\h\e\4\i\x\e\6\2\8\z\d\9\u\i\7\5\g\d\d\z\a\j\k\f\r\i\0\o\s\b\i\d\a\4\m\f\0\c\y\1\t\3\x\r\4\5\2\h\s\2\s\w\q\o\d\e\p\q\t\e\8\p\8\b\1\0\g\4\3\2\v\g\r\3\f\l\e\t\7\b\z\1\c\2\u\9\2\4\y\u\d\g\x\4\j\1\h\u\n\f\p\w\9\n\e\b\e\k\p\y\7\f\y\t\s\q\j\p\w\o\d\y\c\1\r\8\u\x\c\c\w\b\f\0\q\p\c\s\5\6\h\1\e\r\j\x\d\i\h\0\8\i\y\d\o\e\h\a\3\x\0\9\d\m\6\i\o\4\4\b\d\i\p\n\a\a\l\d\k\p\h\a\6\m\e\9\s\l\b\u\s\q\a\k\n\7\q\s\9\a\p\f\2\h\d\g\s\r\z\5\c\h\k\2\x\o\3\b\9\s\j\2 ]] 00:20:29.826 16:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:29.826 16:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:20:29.826 [2024-07-22 16:58:31.084517] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:29.826 [2024-07-22 16:58:31.084699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66913 ] 00:20:29.826 [2024-07-22 16:58:31.274548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.085 [2024-07-22 16:58:31.617417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.344 [2024-07-22 16:58:31.884048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:31.978  Copying: 512/512 [B] (average 500 kBps) 00:20:31.978 00:20:31.978 16:58:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5ibc7ce8xeuu1929b7w6abx5blb8myzhsdvmhasxqyvl9acpwqctfq9f5csbre5sgara6o2izcalubhogmp1xsnd4xcpz1zupludmfv8jayq8duu9c53akbaq7j6tnlpmbkrcqoagiy0pl0wy58ir74zoquiuponmnaso9t0gwlmmdu804drznt6uem42siw4cbi5z9js8m5bbtxinpiwg4brvgclw8xng424uzxublnr4x9ruoaz6gk1d6k56afrwa4ve5lh1jtf0g7jayfn84bz8g9rzkzixhe4ixe628zd9ui75gddzajkfri0osbida4mf0cy1t3xr452hs2swqodepqte8p8b10g432vgr3flet7bz1c2u924yudgx4j1hunfpw9nebekpy7fytsqjpwodyc1r8uxccwbf0qpcs56h1erjxdih08iydoeha3x09dm6io44bdipnaaldkpha6me9slbusqakn7qs9apf2hdgsrz5chk2xo3b9sj2 == \5\i\b\c\7\c\e\8\x\e\u\u\1\9\2\9\b\7\w\6\a\b\x\5\b\l\b\8\m\y\z\h\s\d\v\m\h\a\s\x\q\y\v\l\9\a\c\p\w\q\c\t\f\q\9\f\5\c\s\b\r\e\5\s\g\a\r\a\6\o\2\i\z\c\a\l\u\b\h\o\g\m\p\1\x\s\n\d\4\x\c\p\z\1\z\u\p\l\u\d\m\f\v\8\j\a\y\q\8\d\u\u\9\c\5\3\a\k\b\a\q\7\j\6\t\n\l\p\m\b\k\r\c\q\o\a\g\i\y\0\p\l\0\w\y\5\8\i\r\7\4\z\o\q\u\i\u\p\o\n\m\n\a\s\o\9\t\0\g\w\l\m\m\d\u\8\0\4\d\r\z\n\t\6\u\e\m\4\2\s\i\w\4\c\b\i\5\z\9\j\s\8\m\5\b\b\t\x\i\n\p\i\w\g\4\b\r\v\g\c\l\w\8\x\n\g\4\2\4\u\z\x\u\b\l\n\r\4\x\9\r\u\o\a\z\6\g\k\1\d\6\k\5\6\a\f\r\w\a\4\v\e\5\l\h\1\j\t\f\0\g\7\j\a\y\f\n\8\4\b\z\8\g\9\r\z\k\z\i\x\h\e\4\i\x\e\6\2\8\z\d\9\u\i\7\5\g\d\d\z\a\j\k\f\r\i\0\o\s\b\i\d\a\4\m\f\0\c\y\1\t\3\x\r\4\5\2\h\s\2\s\w\q\o\d\e\p\q\t\e\8\p\8\b\1\0\g\4\3\2\v\g\r\3\f\l\e\t\7\b\z\1\c\2\u\9\2\4\y\u\d\g\x\4\j\1\h\u\n\f\p\w\9\n\e\b\e\k\p\y\7\f\y\t\s\q\j\p\w\o\d\y\c\1\r\8\u\x\c\c\w\b\f\0\q\p\c\s\5\6\h\1\e\r\j\x\d\i\h\0\8\i\y\d\o\e\h\a\3\x\0\9\d\m\6\i\o\4\4\b\d\i\p\n\a\a\l\d\k\p\h\a\6\m\e\9\s\l\b\u\s\q\a\k\n\7\q\s\9\a\p\f\2\h\d\g\s\r\z\5\c\h\k\2\x\o\3\b\9\s\j\2 ]] 00:20:31.978 16:58:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:31.978 16:58:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:20:32.236 [2024-07-22 16:58:33.694821] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:32.236 [2024-07-22 16:58:33.694989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66949 ] 00:20:32.495 [2024-07-22 16:58:33.877805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.754 [2024-07-22 16:58:34.126680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.013 [2024-07-22 16:58:34.393786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:34.389  Copying: 512/512 [B] (average 38 kBps) 00:20:34.389 00:20:34.647 16:58:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5ibc7ce8xeuu1929b7w6abx5blb8myzhsdvmhasxqyvl9acpwqctfq9f5csbre5sgara6o2izcalubhogmp1xsnd4xcpz1zupludmfv8jayq8duu9c53akbaq7j6tnlpmbkrcqoagiy0pl0wy58ir74zoquiuponmnaso9t0gwlmmdu804drznt6uem42siw4cbi5z9js8m5bbtxinpiwg4brvgclw8xng424uzxublnr4x9ruoaz6gk1d6k56afrwa4ve5lh1jtf0g7jayfn84bz8g9rzkzixhe4ixe628zd9ui75gddzajkfri0osbida4mf0cy1t3xr452hs2swqodepqte8p8b10g432vgr3flet7bz1c2u924yudgx4j1hunfpw9nebekpy7fytsqjpwodyc1r8uxccwbf0qpcs56h1erjxdih08iydoeha3x09dm6io44bdipnaaldkpha6me9slbusqakn7qs9apf2hdgsrz5chk2xo3b9sj2 == \5\i\b\c\7\c\e\8\x\e\u\u\1\9\2\9\b\7\w\6\a\b\x\5\b\l\b\8\m\y\z\h\s\d\v\m\h\a\s\x\q\y\v\l\9\a\c\p\w\q\c\t\f\q\9\f\5\c\s\b\r\e\5\s\g\a\r\a\6\o\2\i\z\c\a\l\u\b\h\o\g\m\p\1\x\s\n\d\4\x\c\p\z\1\z\u\p\l\u\d\m\f\v\8\j\a\y\q\8\d\u\u\9\c\5\3\a\k\b\a\q\7\j\6\t\n\l\p\m\b\k\r\c\q\o\a\g\i\y\0\p\l\0\w\y\5\8\i\r\7\4\z\o\q\u\i\u\p\o\n\m\n\a\s\o\9\t\0\g\w\l\m\m\d\u\8\0\4\d\r\z\n\t\6\u\e\m\4\2\s\i\w\4\c\b\i\5\z\9\j\s\8\m\5\b\b\t\x\i\n\p\i\w\g\4\b\r\v\g\c\l\w\8\x\n\g\4\2\4\u\z\x\u\b\l\n\r\4\x\9\r\u\o\a\z\6\g\k\1\d\6\k\5\6\a\f\r\w\a\4\v\e\5\l\h\1\j\t\f\0\g\7\j\a\y\f\n\8\4\b\z\8\g\9\r\z\k\z\i\x\h\e\4\i\x\e\6\2\8\z\d\9\u\i\7\5\g\d\d\z\a\j\k\f\r\i\0\o\s\b\i\d\a\4\m\f\0\c\y\1\t\3\x\r\4\5\2\h\s\2\s\w\q\o\d\e\p\q\t\e\8\p\8\b\1\0\g\4\3\2\v\g\r\3\f\l\e\t\7\b\z\1\c\2\u\9\2\4\y\u\d\g\x\4\j\1\h\u\n\f\p\w\9\n\e\b\e\k\p\y\7\f\y\t\s\q\j\p\w\o\d\y\c\1\r\8\u\x\c\c\w\b\f\0\q\p\c\s\5\6\h\1\e\r\j\x\d\i\h\0\8\i\y\d\o\e\h\a\3\x\0\9\d\m\6\i\o\4\4\b\d\i\p\n\a\a\l\d\k\p\h\a\6\m\e\9\s\l\b\u\s\q\a\k\n\7\q\s\9\a\p\f\2\h\d\g\s\r\z\5\c\h\k\2\x\o\3\b\9\s\j\2 ]] 00:20:34.647 16:58:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:34.647 16:58:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:20:34.648 [2024-07-22 16:58:36.119994] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:34.648 [2024-07-22 16:58:36.120136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66974 ] 00:20:34.906 [2024-07-22 16:58:36.284707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.165 [2024-07-22 16:58:36.614960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.424 [2024-07-22 16:58:36.887278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:37.372  Copying: 512/512 [B] (average 500 kBps) 00:20:37.372 00:20:37.372 ************************************ 00:20:37.372 END TEST dd_flags_misc_forced_aio 00:20:37.372 ************************************ 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5ibc7ce8xeuu1929b7w6abx5blb8myzhsdvmhasxqyvl9acpwqctfq9f5csbre5sgara6o2izcalubhogmp1xsnd4xcpz1zupludmfv8jayq8duu9c53akbaq7j6tnlpmbkrcqoagiy0pl0wy58ir74zoquiuponmnaso9t0gwlmmdu804drznt6uem42siw4cbi5z9js8m5bbtxinpiwg4brvgclw8xng424uzxublnr4x9ruoaz6gk1d6k56afrwa4ve5lh1jtf0g7jayfn84bz8g9rzkzixhe4ixe628zd9ui75gddzajkfri0osbida4mf0cy1t3xr452hs2swqodepqte8p8b10g432vgr3flet7bz1c2u924yudgx4j1hunfpw9nebekpy7fytsqjpwodyc1r8uxccwbf0qpcs56h1erjxdih08iydoeha3x09dm6io44bdipnaaldkpha6me9slbusqakn7qs9apf2hdgsrz5chk2xo3b9sj2 == \5\i\b\c\7\c\e\8\x\e\u\u\1\9\2\9\b\7\w\6\a\b\x\5\b\l\b\8\m\y\z\h\s\d\v\m\h\a\s\x\q\y\v\l\9\a\c\p\w\q\c\t\f\q\9\f\5\c\s\b\r\e\5\s\g\a\r\a\6\o\2\i\z\c\a\l\u\b\h\o\g\m\p\1\x\s\n\d\4\x\c\p\z\1\z\u\p\l\u\d\m\f\v\8\j\a\y\q\8\d\u\u\9\c\5\3\a\k\b\a\q\7\j\6\t\n\l\p\m\b\k\r\c\q\o\a\g\i\y\0\p\l\0\w\y\5\8\i\r\7\4\z\o\q\u\i\u\p\o\n\m\n\a\s\o\9\t\0\g\w\l\m\m\d\u\8\0\4\d\r\z\n\t\6\u\e\m\4\2\s\i\w\4\c\b\i\5\z\9\j\s\8\m\5\b\b\t\x\i\n\p\i\w\g\4\b\r\v\g\c\l\w\8\x\n\g\4\2\4\u\z\x\u\b\l\n\r\4\x\9\r\u\o\a\z\6\g\k\1\d\6\k\5\6\a\f\r\w\a\4\v\e\5\l\h\1\j\t\f\0\g\7\j\a\y\f\n\8\4\b\z\8\g\9\r\z\k\z\i\x\h\e\4\i\x\e\6\2\8\z\d\9\u\i\7\5\g\d\d\z\a\j\k\f\r\i\0\o\s\b\i\d\a\4\m\f\0\c\y\1\t\3\x\r\4\5\2\h\s\2\s\w\q\o\d\e\p\q\t\e\8\p\8\b\1\0\g\4\3\2\v\g\r\3\f\l\e\t\7\b\z\1\c\2\u\9\2\4\y\u\d\g\x\4\j\1\h\u\n\f\p\w\9\n\e\b\e\k\p\y\7\f\y\t\s\q\j\p\w\o\d\y\c\1\r\8\u\x\c\c\w\b\f\0\q\p\c\s\5\6\h\1\e\r\j\x\d\i\h\0\8\i\y\d\o\e\h\a\3\x\0\9\d\m\6\i\o\4\4\b\d\i\p\n\a\a\l\d\k\p\h\a\6\m\e\9\s\l\b\u\s\q\a\k\n\7\q\s\9\a\p\f\2\h\d\g\s\r\z\5\c\h\k\2\x\o\3\b\9\s\j\2 ]] 00:20:37.372 00:20:37.372 real 0m19.811s 00:20:37.372 user 0m16.678s 00:20:37.372 sys 0m2.099s 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:20:37.372 00:20:37.372 real 1m23.248s 00:20:37.372 user 1m8.280s 00:20:37.372 sys 0m20.615s 00:20:37.372 16:58:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.372 16:58:38 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:20:37.372 ************************************ 00:20:37.372 END TEST spdk_dd_posix 00:20:37.372 ************************************ 00:20:37.372 16:58:38 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:20:37.372 16:58:38 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:20:37.372 16:58:38 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.372 16:58:38 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.372 16:58:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:20:37.372 ************************************ 00:20:37.372 START TEST spdk_dd_malloc 00:20:37.372 ************************************ 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:20:37.372 * Looking for test storage... 00:20:37.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:20:37.372 ************************************ 00:20:37.372 START TEST dd_malloc_copy 00:20:37.372 ************************************ 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:20:37.372 16:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:20:37.372 { 00:20:37.372 "subsystems": [ 00:20:37.372 { 00:20:37.372 "subsystem": "bdev", 00:20:37.372 "config": [ 00:20:37.372 { 00:20:37.372 "params": { 00:20:37.372 "block_size": 512, 00:20:37.372 "num_blocks": 1048576, 00:20:37.372 "name": "malloc0" 00:20:37.372 }, 00:20:37.372 "method": "bdev_malloc_create" 00:20:37.372 }, 00:20:37.372 { 00:20:37.372 "params": { 00:20:37.372 "block_size": 512, 00:20:37.372 "num_blocks": 1048576, 00:20:37.372 "name": "malloc1" 00:20:37.372 }, 00:20:37.372 "method": "bdev_malloc_create" 00:20:37.372 }, 00:20:37.372 { 00:20:37.372 "method": "bdev_wait_for_examine" 00:20:37.372 } 00:20:37.372 ] 00:20:37.372 } 00:20:37.372 ] 00:20:37.372 } 00:20:37.372 [2024-07-22 16:58:38.929381] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:37.373 [2024-07-22 16:58:38.930277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67068 ] 00:20:37.631 [2024-07-22 16:58:39.113279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.890 [2024-07-22 16:58:39.368916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.149 [2024-07-22 16:58:39.629750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:47.537  Copying: 199/512 [MB] (199 MBps) Copying: 393/512 [MB] (194 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:20:47.537 00:20:47.537 16:58:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:20:47.537 16:58:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:20:47.537 16:58:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:20:47.538 16:58:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:20:47.538 { 00:20:47.538 "subsystems": [ 00:20:47.538 { 00:20:47.538 "subsystem": "bdev", 00:20:47.538 "config": [ 00:20:47.538 { 00:20:47.538 "params": { 00:20:47.538 "block_size": 512, 00:20:47.538 "num_blocks": 1048576, 00:20:47.538 "name": "malloc0" 00:20:47.538 }, 00:20:47.538 "method": "bdev_malloc_create" 00:20:47.538 }, 00:20:47.538 { 00:20:47.538 "params": { 00:20:47.538 "block_size": 512, 00:20:47.538 "num_blocks": 1048576, 00:20:47.538 "name": "malloc1" 00:20:47.538 }, 00:20:47.538 "method": "bdev_malloc_create" 00:20:47.538 }, 00:20:47.538 { 00:20:47.538 "method": "bdev_wait_for_examine" 00:20:47.538 } 00:20:47.538 ] 00:20:47.538 } 00:20:47.538 ] 00:20:47.538 } 00:20:47.538 [2024-07-22 16:58:48.480921] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:47.538 [2024-07-22 16:58:48.481100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67178 ] 00:20:47.538 [2024-07-22 16:58:48.662482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.538 [2024-07-22 16:58:48.923635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.796 [2024-07-22 16:58:49.193063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:57.460  Copying: 182/512 [MB] (182 MBps) Copying: 370/512 [MB] (188 MBps) Copying: 512/512 [MB] (average 186 MBps) 00:20:57.460 00:20:57.460 00:20:57.460 real 0m19.434s 00:20:57.460 user 0m18.034s 00:20:57.460 sys 0m1.162s 00:20:57.460 16:58:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.460 16:58:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:20:57.460 ************************************ 00:20:57.460 END TEST dd_malloc_copy 00:20:57.460 ************************************ 00:20:57.460 16:58:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:20:57.460 00:20:57.460 real 0m19.603s 00:20:57.460 user 0m18.094s 00:20:57.460 sys 0m1.278s 00:20:57.460 16:58:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.460 16:58:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:20:57.460 ************************************ 00:20:57.460 END TEST spdk_dd_malloc 00:20:57.460 ************************************ 00:20:57.460 16:58:58 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:20:57.460 16:58:58 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:20:57.460 16:58:58 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:57.460 16:58:58 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.460 16:58:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:20:57.460 ************************************ 00:20:57.460 START TEST spdk_dd_bdev_to_bdev 00:20:57.460 ************************************ 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:20:57.460 * Looking for test storage... 
00:20:57.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:20:57.460 
16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:20:57.460 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:20:57.461 ************************************ 00:20:57.461 START TEST dd_inflate_file 00:20:57.461 ************************************ 00:20:57.461 16:58:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:20:57.461 [2024-07-22 16:58:58.553322] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:57.461 [2024-07-22 16:58:58.553730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67350 ] 00:20:57.461 [2024-07-22 16:58:58.739079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.461 [2024-07-22 16:58:58.979423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.721 [2024-07-22 16:58:59.246506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:59.355  Copying: 64/64 [MB] (average 1523 MBps) 00:20:59.355 00:20:59.355 00:20:59.355 real 0m2.499s 00:20:59.355 user 0m2.103s 00:20:59.355 sys 0m1.331s 00:20:59.355 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.355 ************************************ 00:20:59.355 END TEST dd_inflate_file 00:20:59.355 ************************************ 00:20:59.355 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.613 16:59:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:20:59.613 ************************************ 00:20:59.613 START TEST dd_copy_to_out_bdev 00:20:59.613 ************************************ 00:20:59.613 16:59:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:20:59.613 { 00:20:59.613 "subsystems": [ 00:20:59.613 { 00:20:59.613 "subsystem": "bdev", 00:20:59.613 "config": [ 00:20:59.613 { 00:20:59.613 "params": { 00:20:59.613 "trtype": "pcie", 00:20:59.613 "traddr": "0000:00:10.0", 00:20:59.613 "name": "Nvme0" 00:20:59.613 }, 00:20:59.613 "method": "bdev_nvme_attach_controller" 00:20:59.613 }, 00:20:59.613 { 00:20:59.613 "params": { 00:20:59.613 "trtype": "pcie", 00:20:59.613 "traddr": "0000:00:11.0", 00:20:59.613 "name": "Nvme1" 00:20:59.613 }, 00:20:59.613 "method": "bdev_nvme_attach_controller" 00:20:59.613 }, 00:20:59.613 { 00:20:59.613 "method": "bdev_wait_for_examine" 00:20:59.613 } 00:20:59.613 ] 00:20:59.613 } 00:20:59.613 ] 00:20:59.613 } 00:20:59.613 [2024-07-22 16:59:01.129369] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:59.613 [2024-07-22 16:59:01.129554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67407 ] 00:20:59.871 [2024-07-22 16:59:01.318440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.129 [2024-07-22 16:59:01.644878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.387 [2024-07-22 16:59:01.906870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:03.136  Copying: 64/64 [MB] (average 71 MBps) 00:21:03.136 00:21:03.136 00:21:03.136 real 0m3.685s 00:21:03.136 user 0m3.293s 00:21:03.136 sys 0m2.217s 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:21:03.136 ************************************ 00:21:03.136 END TEST dd_copy_to_out_bdev 00:21:03.136 ************************************ 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:21:03.136 ************************************ 00:21:03.136 START TEST dd_offset_magic 00:21:03.136 ************************************ 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:21:03.136 16:59:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:21:03.394 { 00:21:03.394 "subsystems": [ 00:21:03.394 { 00:21:03.394 "subsystem": "bdev", 00:21:03.394 "config": [ 00:21:03.394 { 00:21:03.394 "params": { 00:21:03.394 "trtype": "pcie", 00:21:03.394 "traddr": "0000:00:10.0", 00:21:03.394 "name": "Nvme0" 00:21:03.394 }, 00:21:03.394 "method": "bdev_nvme_attach_controller" 00:21:03.394 }, 00:21:03.394 { 00:21:03.394 "params": { 00:21:03.394 "trtype": "pcie", 00:21:03.394 "traddr": "0000:00:11.0", 00:21:03.394 
"name": "Nvme1" 00:21:03.394 }, 00:21:03.394 "method": "bdev_nvme_attach_controller" 00:21:03.394 }, 00:21:03.394 { 00:21:03.394 "method": "bdev_wait_for_examine" 00:21:03.394 } 00:21:03.394 ] 00:21:03.394 } 00:21:03.394 ] 00:21:03.394 } 00:21:03.394 [2024-07-22 16:59:04.859289] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:03.394 [2024-07-22 16:59:04.859466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67464 ] 00:21:03.653 [2024-07-22 16:59:05.040965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.911 [2024-07-22 16:59:05.349751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.169 [2024-07-22 16:59:05.636567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:05.802  Copying: 65/65 [MB] (average 1083 MBps) 00:21:05.802 00:21:05.802 16:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:21:05.802 16:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:21:05.802 16:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:21:05.802 16:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:21:05.802 { 00:21:05.802 "subsystems": [ 00:21:05.802 { 00:21:05.802 "subsystem": "bdev", 00:21:05.802 "config": [ 00:21:05.802 { 00:21:05.802 "params": { 00:21:05.802 "trtype": "pcie", 00:21:05.802 "traddr": "0000:00:10.0", 00:21:05.802 "name": "Nvme0" 00:21:05.802 }, 00:21:05.802 "method": "bdev_nvme_attach_controller" 00:21:05.802 }, 00:21:05.802 { 00:21:05.802 "params": { 00:21:05.802 "trtype": "pcie", 00:21:05.802 "traddr": "0000:00:11.0", 00:21:05.802 "name": "Nvme1" 00:21:05.802 }, 00:21:05.802 "method": "bdev_nvme_attach_controller" 00:21:05.802 }, 00:21:05.802 { 00:21:05.802 "method": "bdev_wait_for_examine" 00:21:05.802 } 00:21:05.802 ] 00:21:05.802 } 00:21:05.802 ] 00:21:05.802 } 00:21:05.802 [2024-07-22 16:59:07.393599] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:05.802 [2024-07-22 16:59:07.393730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67507 ] 00:21:06.060 [2024-07-22 16:59:07.556272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.318 [2024-07-22 16:59:07.826673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.576 [2024-07-22 16:59:08.083956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:08.291  Copying: 1024/1024 [kB] (average 1000 MBps) 00:21:08.291 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:21:08.291 16:59:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:21:08.549 { 00:21:08.549 "subsystems": [ 00:21:08.549 { 00:21:08.549 "subsystem": "bdev", 00:21:08.549 "config": [ 00:21:08.549 { 00:21:08.549 "params": { 00:21:08.549 "trtype": "pcie", 00:21:08.549 "traddr": "0000:00:10.0", 00:21:08.549 "name": "Nvme0" 00:21:08.549 }, 00:21:08.549 "method": "bdev_nvme_attach_controller" 00:21:08.549 }, 00:21:08.549 { 00:21:08.549 "params": { 00:21:08.549 "trtype": "pcie", 00:21:08.549 "traddr": "0000:00:11.0", 00:21:08.549 "name": "Nvme1" 00:21:08.549 }, 00:21:08.549 "method": "bdev_nvme_attach_controller" 00:21:08.549 }, 00:21:08.549 { 00:21:08.549 "method": "bdev_wait_for_examine" 00:21:08.549 } 00:21:08.549 ] 00:21:08.549 } 00:21:08.549 ] 00:21:08.549 } 00:21:08.549 [2024-07-22 16:59:09.959300] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:08.549 [2024-07-22 16:59:09.959436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67541 ] 00:21:08.549 [2024-07-22 16:59:10.128437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.808 [2024-07-22 16:59:10.383348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.067 [2024-07-22 16:59:10.638970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:11.008  Copying: 65/65 [MB] (average 1048 MBps) 00:21:11.008 00:21:11.008 16:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:21:11.008 16:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:21:11.008 16:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:21:11.008 16:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 { 00:21:11.008 "subsystems": [ 00:21:11.008 { 00:21:11.008 "subsystem": "bdev", 00:21:11.008 "config": [ 00:21:11.008 { 00:21:11.008 "params": { 00:21:11.008 "trtype": "pcie", 00:21:11.008 "traddr": "0000:00:10.0", 00:21:11.008 "name": "Nvme0" 00:21:11.008 }, 00:21:11.008 "method": "bdev_nvme_attach_controller" 00:21:11.008 }, 00:21:11.008 { 00:21:11.008 "params": { 00:21:11.008 "trtype": "pcie", 00:21:11.008 "traddr": "0000:00:11.0", 00:21:11.008 "name": "Nvme1" 00:21:11.008 }, 00:21:11.008 "method": "bdev_nvme_attach_controller" 00:21:11.009 }, 00:21:11.009 { 00:21:11.009 "method": "bdev_wait_for_examine" 00:21:11.009 } 00:21:11.009 ] 00:21:11.009 } 00:21:11.009 ] 00:21:11.009 } 00:21:11.009 [2024-07-22 16:59:12.318356] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:11.009 [2024-07-22 16:59:12.318513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67579 ] 00:21:11.009 [2024-07-22 16:59:12.491420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.267 [2024-07-22 16:59:12.825410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.542 [2024-07-22 16:59:13.107426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:13.716  Copying: 1024/1024 [kB] (average 500 MBps) 00:21:13.716 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:21:13.716 00:21:13.716 real 0m10.143s 00:21:13.716 user 0m8.793s 00:21:13.716 sys 0m3.062s 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.716 ************************************ 00:21:13.716 END TEST dd_offset_magic 00:21:13.716 ************************************ 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:21:13.716 16:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:21:13.716 { 00:21:13.717 "subsystems": [ 00:21:13.717 { 00:21:13.717 "subsystem": "bdev", 00:21:13.717 "config": [ 00:21:13.717 { 00:21:13.717 "params": { 00:21:13.717 "trtype": "pcie", 00:21:13.717 "traddr": "0000:00:10.0", 00:21:13.717 "name": "Nvme0" 00:21:13.717 }, 00:21:13.717 "method": "bdev_nvme_attach_controller" 00:21:13.717 }, 00:21:13.717 { 00:21:13.717 "params": { 00:21:13.717 "trtype": "pcie", 00:21:13.717 "traddr": "0000:00:11.0", 00:21:13.717 "name": "Nvme1" 00:21:13.717 }, 00:21:13.717 "method": "bdev_nvme_attach_controller" 00:21:13.717 }, 00:21:13.717 { 00:21:13.717 "method": "bdev_wait_for_examine" 00:21:13.717 } 00:21:13.717 ] 00:21:13.717 } 00:21:13.717 ] 00:21:13.717 } 00:21:13.717 [2024-07-22 16:59:15.028682] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:13.717 [2024-07-22 16:59:15.028835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67633 ] 00:21:13.717 [2024-07-22 16:59:15.204598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.974 [2024-07-22 16:59:15.537582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.232 [2024-07-22 16:59:15.810269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:15.871  Copying: 5120/5120 [kB] (average 1000 MBps) 00:21:15.871 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:21:15.871 16:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:21:15.871 { 00:21:15.871 "subsystems": [ 00:21:15.871 { 00:21:15.871 "subsystem": "bdev", 00:21:15.871 "config": [ 00:21:15.871 { 00:21:15.871 "params": { 00:21:15.871 "trtype": "pcie", 00:21:15.871 "traddr": "0000:00:10.0", 00:21:15.871 "name": "Nvme0" 00:21:15.871 }, 00:21:15.871 "method": "bdev_nvme_attach_controller" 00:21:15.871 }, 00:21:15.871 { 00:21:15.871 "params": { 00:21:15.871 "trtype": "pcie", 00:21:15.871 "traddr": "0000:00:11.0", 00:21:15.871 "name": "Nvme1" 00:21:15.871 }, 00:21:15.871 "method": "bdev_nvme_attach_controller" 00:21:15.871 }, 00:21:15.871 { 00:21:15.871 "method": "bdev_wait_for_examine" 00:21:15.871 } 00:21:15.871 ] 00:21:15.871 } 00:21:15.871 ] 00:21:15.871 } 00:21:15.871 [2024-07-22 16:59:17.438161] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:15.871 [2024-07-22 16:59:17.438387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67672 ] 00:21:16.130 [2024-07-22 16:59:17.619154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.387 [2024-07-22 16:59:17.871018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.645 [2024-07-22 16:59:18.136684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:18.279  Copying: 5120/5120 [kB] (average 833 MBps) 00:21:18.279 00:21:18.537 16:59:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:21:18.537 ************************************ 00:21:18.537 END TEST spdk_dd_bdev_to_bdev 00:21:18.537 ************************************ 00:21:18.537 00:21:18.537 real 0m21.603s 00:21:18.537 user 0m18.581s 00:21:18.537 sys 0m9.005s 00:21:18.537 16:59:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:18.538 16:59:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:21:18.538 16:59:19 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:21:18.538 16:59:19 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:21:18.538 16:59:19 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:21:18.538 16:59:19 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:18.538 16:59:19 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.538 16:59:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:21:18.538 ************************************ 00:21:18.538 START TEST spdk_dd_uring 00:21:18.538 ************************************ 00:21:18.538 16:59:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:21:18.538 * Looking for test storage... 
00:21:18.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:21:18.538 ************************************ 00:21:18.538 START TEST dd_uring_copy 00:21:18.538 ************************************ 00:21:18.538 
16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=yotil9w29odxllfou9cenno6dy4ej7nihn61f4jy5mxxnfu938i5baj2j15egycptx029hci0vxtxm5v0l90cq12k3e4ga843l68ilwv1ml8on5nbtps52g9mc865p7rmdt2eko0z0ytdjnry7w37hbvmx8tqmfxkna4tb15bjqv1piz3eu58hm5ud7wkkh6wxr7re19vdrf75zqd6gbjf0v9v0hhv1ner7sc8n28j9vxrwosmwm9d06jps2n3ardp42husoqde4mx1w7r2hpdde8uc33fn68gh8hw8q5nx1epxhp376p9o23yrahtufh2xoccawdlxbez7h5ouw1egod17t2kk0w78fajlxgojqqdu8zww0iwik934u247xq21kmtsldaqanl0e145po8i7wg8bup2jux5n99dhkmkx1bhtli703r0y8l5sglw513fxuxpqmh5btut7c9vfbbvug0f2deh1tjirugu958qgw2a6vxtcxmekdewhocryi433evsl5vkjzungozzw3ud47pnrzg23wjhju9vk0xabox3ehx61ctjvy173x8it666u73612cutbm82juq96d85bgvrno6viib4f333k5nbr2bq9p59uee4oym92curkussx54ecgx4xkyrj48gd9zrt4mnr36qffe8ol1s67k5yvd15i2y3ur4sy1lrlwqm0be8a27c0zzykfn4ur3gqyie1kjypg9aq4jtlrrii99i982o1g5k5veigk58oz2shvu1yqm3yuzgm1zrin6nkqxpv01ocnzxmfmznbelx32ltjtthp37371zav17kw3kjpkufrejbmau0b7w1awhq4cozdk7duqstxx4b6p1v5xd6n2qzt2zky95gfblcz73g0xumfbomph9126snwkcbm1awugye15hkvcyzf06uce3jp4fpp13llhpqkko1s214szug6ncl2xjwqn0w7srlt8h5wwxwbll9d3ozk3qvwmeqkpu6qb17z8x04gslk9 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo yotil9w29odxllfou9cenno6dy4ej7nihn61f4jy5mxxnfu938i5baj2j15egycptx029hci0vxtxm5v0l90cq12k3e4ga843l68ilwv1ml8on5nbtps52g9mc865p7rmdt2eko0z0ytdjnry7w37hbvmx8tqmfxkna4tb15bjqv1piz3eu58hm5ud7wkkh6wxr7re19vdrf75zqd6gbjf0v9v0hhv1ner7sc8n28j9vxrwosmwm9d06jps2n3ardp42husoqde4mx1w7r2hpdde8uc33fn68gh8hw8q5nx1epxhp376p9o23yrahtufh2xoccawdlxbez7h5ouw1egod17t2kk0w78fajlxgojqqdu8zww0iwik934u247xq21kmtsldaqanl0e145po8i7wg8bup2jux5n99dhkmkx1bhtli703r0y8l5sglw513fxuxpqmh5btut7c9vfbbvug0f2deh1tjirugu958qgw2a6vxtcxmekdewhocryi433evsl5vkjzungozzw3ud47pnrzg23wjhju9vk0xabox3ehx61ctjvy173x8it666u73612cutbm82juq96d85bgvrno6viib4f333k5nbr2bq9p59uee4oym92curkussx54ecgx4xkyrj48gd9zrt4mnr36qffe8ol1s67k5yvd15i2y3ur4sy1lrlwqm0be8a27c0zzykfn4ur3gqyie1kjypg9aq4jtlrrii99i982o1g5k5veigk58oz2shvu1yqm3yuzgm1zrin6nkqxpv01ocnzxmfmznbelx32ltjtthp37371zav17kw3kjpkufrejbmau0b7w1awhq4cozdk7duqstxx4b6p1v5xd6n2qzt2zky95gfblcz73g0xumfbomph9126snwkcbm1awugye15hkvcyzf06uce3jp4fpp13llhpqkko1s214szug6ncl2xjwqn0w7srlt8h5wwxwbll9d3ozk3qvwmeqkpu6qb17z8x04gslk9 00:21:18.538 16:59:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:21:18.797 [2024-07-22 16:59:20.269215] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:18.797 [2024-07-22 16:59:20.269417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67754 ] 00:21:19.053 [2024-07-22 16:59:20.458836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.311 [2024-07-22 16:59:20.794018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.570 [2024-07-22 16:59:21.041522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:24.250  Copying: 511/511 [MB] (average 1199 MBps) 00:21:24.250 00:21:24.250 16:59:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:21:24.250 16:59:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:21:24.250 16:59:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:21:24.250 16:59:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:24.250 { 00:21:24.250 "subsystems": [ 00:21:24.250 { 00:21:24.250 "subsystem": "bdev", 00:21:24.250 "config": [ 00:21:24.250 { 00:21:24.250 "params": { 00:21:24.250 "block_size": 512, 00:21:24.250 "num_blocks": 1048576, 00:21:24.250 "name": "malloc0" 00:21:24.250 }, 00:21:24.250 "method": "bdev_malloc_create" 00:21:24.250 }, 00:21:24.250 { 00:21:24.250 "params": { 00:21:24.250 "filename": "/dev/zram1", 00:21:24.250 "name": "uring0" 00:21:24.250 }, 00:21:24.250 "method": "bdev_uring_create" 00:21:24.250 }, 00:21:24.250 { 00:21:24.250 "method": "bdev_wait_for_examine" 00:21:24.250 } 00:21:24.250 ] 00:21:24.250 } 00:21:24.250 ] 00:21:24.250 } 00:21:24.250 [2024-07-22 16:59:25.646723] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:24.250 [2024-07-22 16:59:25.646892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67821 ] 00:21:24.250 [2024-07-22 16:59:25.831752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.508 [2024-07-22 16:59:26.090709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.766 [2024-07-22 16:59:26.358123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:31.601  Copying: 181/512 [MB] (181 MBps) Copying: 386/512 [MB] (204 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:21:31.601 00:21:31.601 16:59:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:21:31.601 16:59:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:21:31.601 16:59:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:21:31.601 16:59:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:31.601 { 00:21:31.601 "subsystems": [ 00:21:31.601 { 00:21:31.601 "subsystem": "bdev", 00:21:31.601 "config": [ 00:21:31.601 { 00:21:31.601 "params": { 00:21:31.601 "block_size": 512, 00:21:31.601 "num_blocks": 1048576, 00:21:31.601 "name": "malloc0" 00:21:31.601 }, 00:21:31.601 "method": "bdev_malloc_create" 00:21:31.601 }, 00:21:31.601 { 00:21:31.601 "params": { 00:21:31.601 "filename": "/dev/zram1", 00:21:31.601 "name": "uring0" 00:21:31.601 }, 00:21:31.601 "method": "bdev_uring_create" 00:21:31.601 }, 00:21:31.601 { 00:21:31.601 "method": "bdev_wait_for_examine" 00:21:31.601 } 00:21:31.601 ] 00:21:31.601 } 00:21:31.601 ] 00:21:31.601 } 00:21:31.601 [2024-07-22 16:59:33.194875] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:31.601 [2024-07-22 16:59:33.195044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67918 ] 00:21:31.858 [2024-07-22 16:59:33.382395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.115 [2024-07-22 16:59:33.718240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.684 [2024-07-22 16:59:34.000402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:39.779  Copying: 169/512 [MB] (169 MBps) Copying: 333/512 [MB] (163 MBps) Copying: 505/512 [MB] (172 MBps) Copying: 512/512 [MB] (average 168 MBps) 00:21:39.779 00:21:39.779 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:21:39.779 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ yotil9w29odxllfou9cenno6dy4ej7nihn61f4jy5mxxnfu938i5baj2j15egycptx029hci0vxtxm5v0l90cq12k3e4ga843l68ilwv1ml8on5nbtps52g9mc865p7rmdt2eko0z0ytdjnry7w37hbvmx8tqmfxkna4tb15bjqv1piz3eu58hm5ud7wkkh6wxr7re19vdrf75zqd6gbjf0v9v0hhv1ner7sc8n28j9vxrwosmwm9d06jps2n3ardp42husoqde4mx1w7r2hpdde8uc33fn68gh8hw8q5nx1epxhp376p9o23yrahtufh2xoccawdlxbez7h5ouw1egod17t2kk0w78fajlxgojqqdu8zww0iwik934u247xq21kmtsldaqanl0e145po8i7wg8bup2jux5n99dhkmkx1bhtli703r0y8l5sglw513fxuxpqmh5btut7c9vfbbvug0f2deh1tjirugu958qgw2a6vxtcxmekdewhocryi433evsl5vkjzungozzw3ud47pnrzg23wjhju9vk0xabox3ehx61ctjvy173x8it666u73612cutbm82juq96d85bgvrno6viib4f333k5nbr2bq9p59uee4oym92curkussx54ecgx4xkyrj48gd9zrt4mnr36qffe8ol1s67k5yvd15i2y3ur4sy1lrlwqm0be8a27c0zzykfn4ur3gqyie1kjypg9aq4jtlrrii99i982o1g5k5veigk58oz2shvu1yqm3yuzgm1zrin6nkqxpv01ocnzxmfmznbelx32ltjtthp37371zav17kw3kjpkufrejbmau0b7w1awhq4cozdk7duqstxx4b6p1v5xd6n2qzt2zky95gfblcz73g0xumfbomph9126snwkcbm1awugye15hkvcyzf06uce3jp4fpp13llhpqkko1s214szug6ncl2xjwqn0w7srlt8h5wwxwbll9d3ozk3qvwmeqkpu6qb17z8x04gslk9 == 
\y\o\t\i\l\9\w\2\9\o\d\x\l\l\f\o\u\9\c\e\n\n\o\6\d\y\4\e\j\7\n\i\h\n\6\1\f\4\j\y\5\m\x\x\n\f\u\9\3\8\i\5\b\a\j\2\j\1\5\e\g\y\c\p\t\x\0\2\9\h\c\i\0\v\x\t\x\m\5\v\0\l\9\0\c\q\1\2\k\3\e\4\g\a\8\4\3\l\6\8\i\l\w\v\1\m\l\8\o\n\5\n\b\t\p\s\5\2\g\9\m\c\8\6\5\p\7\r\m\d\t\2\e\k\o\0\z\0\y\t\d\j\n\r\y\7\w\3\7\h\b\v\m\x\8\t\q\m\f\x\k\n\a\4\t\b\1\5\b\j\q\v\1\p\i\z\3\e\u\5\8\h\m\5\u\d\7\w\k\k\h\6\w\x\r\7\r\e\1\9\v\d\r\f\7\5\z\q\d\6\g\b\j\f\0\v\9\v\0\h\h\v\1\n\e\r\7\s\c\8\n\2\8\j\9\v\x\r\w\o\s\m\w\m\9\d\0\6\j\p\s\2\n\3\a\r\d\p\4\2\h\u\s\o\q\d\e\4\m\x\1\w\7\r\2\h\p\d\d\e\8\u\c\3\3\f\n\6\8\g\h\8\h\w\8\q\5\n\x\1\e\p\x\h\p\3\7\6\p\9\o\2\3\y\r\a\h\t\u\f\h\2\x\o\c\c\a\w\d\l\x\b\e\z\7\h\5\o\u\w\1\e\g\o\d\1\7\t\2\k\k\0\w\7\8\f\a\j\l\x\g\o\j\q\q\d\u\8\z\w\w\0\i\w\i\k\9\3\4\u\2\4\7\x\q\2\1\k\m\t\s\l\d\a\q\a\n\l\0\e\1\4\5\p\o\8\i\7\w\g\8\b\u\p\2\j\u\x\5\n\9\9\d\h\k\m\k\x\1\b\h\t\l\i\7\0\3\r\0\y\8\l\5\s\g\l\w\5\1\3\f\x\u\x\p\q\m\h\5\b\t\u\t\7\c\9\v\f\b\b\v\u\g\0\f\2\d\e\h\1\t\j\i\r\u\g\u\9\5\8\q\g\w\2\a\6\v\x\t\c\x\m\e\k\d\e\w\h\o\c\r\y\i\4\3\3\e\v\s\l\5\v\k\j\z\u\n\g\o\z\z\w\3\u\d\4\7\p\n\r\z\g\2\3\w\j\h\j\u\9\v\k\0\x\a\b\o\x\3\e\h\x\6\1\c\t\j\v\y\1\7\3\x\8\i\t\6\6\6\u\7\3\6\1\2\c\u\t\b\m\8\2\j\u\q\9\6\d\8\5\b\g\v\r\n\o\6\v\i\i\b\4\f\3\3\3\k\5\n\b\r\2\b\q\9\p\5\9\u\e\e\4\o\y\m\9\2\c\u\r\k\u\s\s\x\5\4\e\c\g\x\4\x\k\y\r\j\4\8\g\d\9\z\r\t\4\m\n\r\3\6\q\f\f\e\8\o\l\1\s\6\7\k\5\y\v\d\1\5\i\2\y\3\u\r\4\s\y\1\l\r\l\w\q\m\0\b\e\8\a\2\7\c\0\z\z\y\k\f\n\4\u\r\3\g\q\y\i\e\1\k\j\y\p\g\9\a\q\4\j\t\l\r\r\i\i\9\9\i\9\8\2\o\1\g\5\k\5\v\e\i\g\k\5\8\o\z\2\s\h\v\u\1\y\q\m\3\y\u\z\g\m\1\z\r\i\n\6\n\k\q\x\p\v\0\1\o\c\n\z\x\m\f\m\z\n\b\e\l\x\3\2\l\t\j\t\t\h\p\3\7\3\7\1\z\a\v\1\7\k\w\3\k\j\p\k\u\f\r\e\j\b\m\a\u\0\b\7\w\1\a\w\h\q\4\c\o\z\d\k\7\d\u\q\s\t\x\x\4\b\6\p\1\v\5\x\d\6\n\2\q\z\t\2\z\k\y\9\5\g\f\b\l\c\z\7\3\g\0\x\u\m\f\b\o\m\p\h\9\1\2\6\s\n\w\k\c\b\m\1\a\w\u\g\y\e\1\5\h\k\v\c\y\z\f\0\6\u\c\e\3\j\p\4\f\p\p\1\3\l\l\h\p\q\k\k\o\1\s\2\1\4\s\z\u\g\6\n\c\l\2\x\j\w\q\n\0\w\7\s\r\l\t\8\h\5\w\w\x\w\b\l\l\9\d\3\o\z\k\3\q\v\w\m\e\q\k\p\u\6\q\b\1\7\z\8\x\0\4\g\s\l\k\9 ]] 00:21:39.779 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:21:39.779 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ yotil9w29odxllfou9cenno6dy4ej7nihn61f4jy5mxxnfu938i5baj2j15egycptx029hci0vxtxm5v0l90cq12k3e4ga843l68ilwv1ml8on5nbtps52g9mc865p7rmdt2eko0z0ytdjnry7w37hbvmx8tqmfxkna4tb15bjqv1piz3eu58hm5ud7wkkh6wxr7re19vdrf75zqd6gbjf0v9v0hhv1ner7sc8n28j9vxrwosmwm9d06jps2n3ardp42husoqde4mx1w7r2hpdde8uc33fn68gh8hw8q5nx1epxhp376p9o23yrahtufh2xoccawdlxbez7h5ouw1egod17t2kk0w78fajlxgojqqdu8zww0iwik934u247xq21kmtsldaqanl0e145po8i7wg8bup2jux5n99dhkmkx1bhtli703r0y8l5sglw513fxuxpqmh5btut7c9vfbbvug0f2deh1tjirugu958qgw2a6vxtcxmekdewhocryi433evsl5vkjzungozzw3ud47pnrzg23wjhju9vk0xabox3ehx61ctjvy173x8it666u73612cutbm82juq96d85bgvrno6viib4f333k5nbr2bq9p59uee4oym92curkussx54ecgx4xkyrj48gd9zrt4mnr36qffe8ol1s67k5yvd15i2y3ur4sy1lrlwqm0be8a27c0zzykfn4ur3gqyie1kjypg9aq4jtlrrii99i982o1g5k5veigk58oz2shvu1yqm3yuzgm1zrin6nkqxpv01ocnzxmfmznbelx32ltjtthp37371zav17kw3kjpkufrejbmau0b7w1awhq4cozdk7duqstxx4b6p1v5xd6n2qzt2zky95gfblcz73g0xumfbomph9126snwkcbm1awugye15hkvcyzf06uce3jp4fpp13llhpqkko1s214szug6ncl2xjwqn0w7srlt8h5wwxwbll9d3ozk3qvwmeqkpu6qb17z8x04gslk9 == 
\y\o\t\i\l\9\w\2\9\o\d\x\l\l\f\o\u\9\c\e\n\n\o\6\d\y\4\e\j\7\n\i\h\n\6\1\f\4\j\y\5\m\x\x\n\f\u\9\3\8\i\5\b\a\j\2\j\1\5\e\g\y\c\p\t\x\0\2\9\h\c\i\0\v\x\t\x\m\5\v\0\l\9\0\c\q\1\2\k\3\e\4\g\a\8\4\3\l\6\8\i\l\w\v\1\m\l\8\o\n\5\n\b\t\p\s\5\2\g\9\m\c\8\6\5\p\7\r\m\d\t\2\e\k\o\0\z\0\y\t\d\j\n\r\y\7\w\3\7\h\b\v\m\x\8\t\q\m\f\x\k\n\a\4\t\b\1\5\b\j\q\v\1\p\i\z\3\e\u\5\8\h\m\5\u\d\7\w\k\k\h\6\w\x\r\7\r\e\1\9\v\d\r\f\7\5\z\q\d\6\g\b\j\f\0\v\9\v\0\h\h\v\1\n\e\r\7\s\c\8\n\2\8\j\9\v\x\r\w\o\s\m\w\m\9\d\0\6\j\p\s\2\n\3\a\r\d\p\4\2\h\u\s\o\q\d\e\4\m\x\1\w\7\r\2\h\p\d\d\e\8\u\c\3\3\f\n\6\8\g\h\8\h\w\8\q\5\n\x\1\e\p\x\h\p\3\7\6\p\9\o\2\3\y\r\a\h\t\u\f\h\2\x\o\c\c\a\w\d\l\x\b\e\z\7\h\5\o\u\w\1\e\g\o\d\1\7\t\2\k\k\0\w\7\8\f\a\j\l\x\g\o\j\q\q\d\u\8\z\w\w\0\i\w\i\k\9\3\4\u\2\4\7\x\q\2\1\k\m\t\s\l\d\a\q\a\n\l\0\e\1\4\5\p\o\8\i\7\w\g\8\b\u\p\2\j\u\x\5\n\9\9\d\h\k\m\k\x\1\b\h\t\l\i\7\0\3\r\0\y\8\l\5\s\g\l\w\5\1\3\f\x\u\x\p\q\m\h\5\b\t\u\t\7\c\9\v\f\b\b\v\u\g\0\f\2\d\e\h\1\t\j\i\r\u\g\u\9\5\8\q\g\w\2\a\6\v\x\t\c\x\m\e\k\d\e\w\h\o\c\r\y\i\4\3\3\e\v\s\l\5\v\k\j\z\u\n\g\o\z\z\w\3\u\d\4\7\p\n\r\z\g\2\3\w\j\h\j\u\9\v\k\0\x\a\b\o\x\3\e\h\x\6\1\c\t\j\v\y\1\7\3\x\8\i\t\6\6\6\u\7\3\6\1\2\c\u\t\b\m\8\2\j\u\q\9\6\d\8\5\b\g\v\r\n\o\6\v\i\i\b\4\f\3\3\3\k\5\n\b\r\2\b\q\9\p\5\9\u\e\e\4\o\y\m\9\2\c\u\r\k\u\s\s\x\5\4\e\c\g\x\4\x\k\y\r\j\4\8\g\d\9\z\r\t\4\m\n\r\3\6\q\f\f\e\8\o\l\1\s\6\7\k\5\y\v\d\1\5\i\2\y\3\u\r\4\s\y\1\l\r\l\w\q\m\0\b\e\8\a\2\7\c\0\z\z\y\k\f\n\4\u\r\3\g\q\y\i\e\1\k\j\y\p\g\9\a\q\4\j\t\l\r\r\i\i\9\9\i\9\8\2\o\1\g\5\k\5\v\e\i\g\k\5\8\o\z\2\s\h\v\u\1\y\q\m\3\y\u\z\g\m\1\z\r\i\n\6\n\k\q\x\p\v\0\1\o\c\n\z\x\m\f\m\z\n\b\e\l\x\3\2\l\t\j\t\t\h\p\3\7\3\7\1\z\a\v\1\7\k\w\3\k\j\p\k\u\f\r\e\j\b\m\a\u\0\b\7\w\1\a\w\h\q\4\c\o\z\d\k\7\d\u\q\s\t\x\x\4\b\6\p\1\v\5\x\d\6\n\2\q\z\t\2\z\k\y\9\5\g\f\b\l\c\z\7\3\g\0\x\u\m\f\b\o\m\p\h\9\1\2\6\s\n\w\k\c\b\m\1\a\w\u\g\y\e\1\5\h\k\v\c\y\z\f\0\6\u\c\e\3\j\p\4\f\p\p\1\3\l\l\h\p\q\k\k\o\1\s\2\1\4\s\z\u\g\6\n\c\l\2\x\j\w\q\n\0\w\7\s\r\l\t\8\h\5\w\w\x\w\b\l\l\9\d\3\o\z\k\3\q\v\w\m\e\q\k\p\u\6\q\b\1\7\z\8\x\0\4\g\s\l\k\9 ]] 00:21:39.779 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:21:40.365 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:21:40.365 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:21:40.365 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:21:40.365 16:59:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:40.365 { 00:21:40.365 "subsystems": [ 00:21:40.365 { 00:21:40.365 "subsystem": "bdev", 00:21:40.365 "config": [ 00:21:40.365 { 00:21:40.365 "params": { 00:21:40.365 "block_size": 512, 00:21:40.365 "num_blocks": 1048576, 00:21:40.365 "name": "malloc0" 00:21:40.365 }, 00:21:40.365 "method": "bdev_malloc_create" 00:21:40.365 }, 00:21:40.365 { 00:21:40.365 "params": { 00:21:40.365 "filename": "/dev/zram1", 00:21:40.365 "name": "uring0" 00:21:40.365 }, 00:21:40.365 "method": "bdev_uring_create" 00:21:40.365 }, 00:21:40.365 { 00:21:40.365 "method": "bdev_wait_for_examine" 00:21:40.365 } 00:21:40.365 ] 00:21:40.365 } 00:21:40.365 ] 00:21:40.365 } 00:21:40.365 [2024-07-22 16:59:41.795641] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
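The two long backslash-escaped blobs above are not corruption; they are bash xtrace output of the magic round-trip check, in which the 1024 bytes read back from the dumps are compared against the expected magic, and set -x prints the right-hand side of [[ ... == ... ]] with every character escaped. A hedged sketch of that check with a generic magic value; the generator pipeline below is an assumption (this excerpt does not show how dd/uring.sh builds the magic), and the full 512 MiB round trip through uring0 is elided.

  # Round-trip integrity check in the style of dd/uring.sh.
  magic=$(head -c 768 /dev/urandom | base64 -w0)   # assumed generator: 1024 printable bytes
  printf '%s' "$magic" > magic.dump0
  # ... spdk_dd copies magic.dump0 into uring0 and uring0 back out to magic.dump1 (as logged above) ...
  read -rn1024 verify_magic < magic.dump1          # same read the trace shows
  [[ $verify_magic == "$magic" ]]                  # the escaped comparison above, unescaped
  diff -q magic.dump0 magic.dump1                  # byte-for-byte check of the two dumps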
00:21:40.365 [2024-07-22 16:59:41.795858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68042 ] 00:21:40.623 [2024-07-22 16:59:41.984307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.882 [2024-07-22 16:59:42.315038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.140 [2024-07-22 16:59:42.595366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:48.829  Copying: 147/512 [MB] (147 MBps) Copying: 297/512 [MB] (150 MBps) Copying: 443/512 [MB] (146 MBps) Copying: 512/512 [MB] (average 147 MBps) 00:21:48.829 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:21:48.829 16:59:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:48.829 { 00:21:48.829 "subsystems": [ 00:21:48.829 { 00:21:48.829 "subsystem": "bdev", 00:21:48.829 "config": [ 00:21:48.829 { 00:21:48.829 "params": { 00:21:48.829 "block_size": 512, 00:21:48.829 "num_blocks": 1048576, 00:21:48.829 "name": "malloc0" 00:21:48.829 }, 00:21:48.829 "method": "bdev_malloc_create" 00:21:48.829 }, 00:21:48.829 { 00:21:48.829 "params": { 00:21:48.829 "filename": "/dev/zram1", 00:21:48.829 "name": "uring0" 00:21:48.829 }, 00:21:48.829 "method": "bdev_uring_create" 00:21:48.829 }, 00:21:48.829 { 00:21:48.829 "params": { 00:21:48.829 "name": "uring0" 00:21:48.829 }, 00:21:48.829 "method": "bdev_uring_delete" 00:21:48.829 }, 00:21:48.829 { 00:21:48.829 "method": "bdev_wait_for_examine" 00:21:48.829 } 00:21:48.829 ] 00:21:48.829 } 00:21:48.829 ] 00:21:48.829 } 00:21:48.829 [2024-07-22 16:59:50.207918] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
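What follows is a deliberate failure path: uring0 is torn down through the bdev_uring_delete entry in the config above, and the next spdk_dd read of uring0 runs under the harness's NOT wrapper, so the step only passes if spdk_dd exits non-zero (the es=237 / es=109 / es=1 bookkeeping further down is that wrapper normalizing the exit status). A rough sketch of the NOT idea, not the exact autotest_common.sh implementation:

  # Expected-failure helper: succeed only when the wrapped command fails.
  NOT() {
    if "$@"; then
      return 1   # command unexpectedly succeeded, so the negative test fails
    fi
    return 0     # command failed as required
  }
  # uring0 has been deleted, so this copy must error out (the output path here is a placeholder).
  NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/null --json <(gen_conf)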
00:21:48.829 [2024-07-22 16:59:50.208062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68150 ] 00:21:48.829 [2024-07-22 16:59:50.371049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.087 [2024-07-22 16:59:50.629160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.346 [2024-07-22 16:59:50.899125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:53.565  Copying: 0/0 [B] (average 0 Bps) 00:21:53.565 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:53.565 16:59:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:21:53.565 { 00:21:53.565 "subsystems": [ 00:21:53.565 { 00:21:53.565 "subsystem": "bdev", 00:21:53.565 "config": [ 00:21:53.565 { 00:21:53.565 "params": { 00:21:53.565 "block_size": 512, 00:21:53.565 "num_blocks": 1048576, 00:21:53.565 "name": "malloc0" 00:21:53.565 }, 00:21:53.565 "method": "bdev_malloc_create" 00:21:53.565 }, 00:21:53.565 { 00:21:53.565 "params": { 00:21:53.565 "filename": "/dev/zram1", 00:21:53.565 "name": "uring0" 00:21:53.565 }, 00:21:53.565 "method": "bdev_uring_create" 00:21:53.565 }, 00:21:53.565 { 00:21:53.565 "params": { 00:21:53.565 "name": "uring0" 00:21:53.565 }, 00:21:53.565 "method": "bdev_uring_delete" 00:21:53.565 }, 
00:21:53.565 { 00:21:53.565 "method": "bdev_wait_for_examine" 00:21:53.565 } 00:21:53.565 ] 00:21:53.565 } 00:21:53.565 ] 00:21:53.565 } 00:21:53.565 [2024-07-22 16:59:54.851500] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:53.565 [2024-07-22 16:59:54.851686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68214 ] 00:21:53.565 [2024-07-22 16:59:55.035415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.823 [2024-07-22 16:59:55.296278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.080 [2024-07-22 16:59:55.568394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:55.013 [2024-07-22 16:59:56.443894] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:21:55.013 [2024-07-22 16:59:56.443951] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:21:55.013 [2024-07-22 16:59:56.443969] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:21:55.013 [2024-07-22 16:59:56.443988] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:57.541 [2024-07-22 16:59:59.094553] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:21:58.108 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:21:58.366 00:21:58.366 real 0m39.803s 00:21:58.366 user 0m33.851s 00:21:58.366 sys 0m16.694s 00:21:58.366 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.366 16:59:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:21:58.366 ************************************ 00:21:58.366 END TEST dd_uring_copy 00:21:58.366 ************************************ 00:21:58.366 16:59:59 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:21:58.366 00:21:58.366 real 0m39.970s 00:21:58.366 user 0m33.930s 00:21:58.366 sys 0m16.783s 00:21:58.366 16:59:59 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.366 16:59:59 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.366 ************************************ 00:21:58.366 END TEST spdk_dd_uring 00:21:58.366 ************************************ 00:21:58.625 16:59:59 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:21:58.625 16:59:59 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:21:58.625 16:59:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:58.625 16:59:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.625 16:59:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:21:58.625 ************************************ 00:21:58.625 START TEST spdk_dd_sparse 00:21:58.625 ************************************ 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:21:58.625 * Looking for test storage... 00:21:58.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:21:58.625 1+0 records in 00:21:58.625 1+0 records out 00:21:58.625 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00706433 s, 594 MB/s 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:21:58.625 1+0 records in 00:21:58.625 1+0 records out 00:21:58.625 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00726015 s, 578 MB/s 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:21:58.625 1+0 records in 00:21:58.625 1+0 records out 00:21:58.625 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00437419 s, 959 MB/s 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:21:58.625 ************************************ 00:21:58.625 START TEST dd_sparse_file_to_file 00:21:58.625 ************************************ 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' 
['lvs_name']='dd_lvstore') 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:21:58.625 17:00:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:21:58.625 { 00:21:58.625 "subsystems": [ 00:21:58.625 { 00:21:58.625 "subsystem": "bdev", 00:21:58.625 "config": [ 00:21:58.625 { 00:21:58.625 "params": { 00:21:58.625 "block_size": 4096, 00:21:58.625 "filename": "dd_sparse_aio_disk", 00:21:58.625 "name": "dd_aio" 00:21:58.625 }, 00:21:58.625 "method": "bdev_aio_create" 00:21:58.625 }, 00:21:58.625 { 00:21:58.625 "params": { 00:21:58.625 "lvs_name": "dd_lvstore", 00:21:58.625 "bdev_name": "dd_aio" 00:21:58.625 }, 00:21:58.625 "method": "bdev_lvol_create_lvstore" 00:21:58.625 }, 00:21:58.625 { 00:21:58.625 "method": "bdev_wait_for_examine" 00:21:58.625 } 00:21:58.625 ] 00:21:58.625 } 00:21:58.625 ] 00:21:58.625 } 00:21:58.883 [2024-07-22 17:00:00.278794] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:58.883 [2024-07-22 17:00:00.278966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68344 ] 00:21:58.883 [2024-07-22 17:00:00.466521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.448 [2024-07-22 17:00:00.812320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.705 [2024-07-22 17:00:01.095004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:01.331  Copying: 12/36 [MB] (average 923 MBps) 00:22:01.331 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:22:01.587 00:22:01.587 real 0m2.844s 00:22:01.587 user 0m2.422s 00:22:01.587 sys 0m1.373s 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:22:01.587 17:00:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.587 ************************************ 00:22:01.587 END TEST dd_sparse_file_to_file 00:22:01.587 ************************************ 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:22:01.587 ************************************ 00:22:01.587 START TEST dd_sparse_file_to_bdev 00:22:01.587 ************************************ 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:22:01.587 17:00:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:22:01.587 { 00:22:01.587 "subsystems": [ 00:22:01.587 { 00:22:01.587 "subsystem": "bdev", 00:22:01.587 "config": [ 00:22:01.587 { 00:22:01.587 "params": { 00:22:01.587 "block_size": 4096, 00:22:01.587 "filename": "dd_sparse_aio_disk", 00:22:01.587 "name": "dd_aio" 00:22:01.587 }, 00:22:01.587 "method": "bdev_aio_create" 00:22:01.587 }, 00:22:01.587 { 00:22:01.587 "params": { 00:22:01.587 "lvs_name": "dd_lvstore", 00:22:01.587 "lvol_name": "dd_lvol", 00:22:01.587 "size_in_mib": 36, 00:22:01.587 "thin_provision": true 00:22:01.587 }, 00:22:01.587 "method": "bdev_lvol_create" 00:22:01.587 }, 00:22:01.587 { 00:22:01.587 "method": "bdev_wait_for_examine" 00:22:01.587 } 00:22:01.587 ] 00:22:01.587 } 00:22:01.587 ] 00:22:01.587 } 00:22:01.587 [2024-07-22 17:00:03.148731] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
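This sub-test (dd_sparse_file_to_bdev) copies the sparse file into a 36 MiB thin-provisioned lvol (dd_lvstore/dd_lvol) created on the dd_aio bdev by the config above. The surrounding sub-tests, dd_sparse_file_to_file before it and dd_sparse_bdev_to_file after it, pass on a pair of stat checks: apparent size (%s) and allocated 512-byte blocks (%b) must both survive the --sparse copy unchanged. A condensed sketch of the fixture and that check, using only the coreutils commands and the numbers already present in this log:

  # Sparse fixture: three 4 MiB writes at 16 MiB strides, so 36 MiB apparent / 12 MiB allocated.
  truncate dd_sparse_aio_disk --size 104857600        # backing file for the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1         # data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # data at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # data at 32 MiB
  stat --printf='%s\n' file_zero1                     # 37748736 apparent bytes
  stat --printf='%b\n' file_zero1                     # 24576 allocated 512-byte blocks
  # After an spdk_dd --sparse copy, the destination must report the same %s and %b.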
00:22:01.587 [2024-07-22 17:00:03.148867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68410 ] 00:22:01.845 [2024-07-22 17:00:03.313137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.104 [2024-07-22 17:00:03.611769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.362 [2024-07-22 17:00:03.879610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:04.544  Copying: 12/36 [MB] (average 521 MBps) 00:22:04.544 00:22:04.544 00:22:04.544 real 0m2.682s 00:22:04.544 user 0m2.299s 00:22:04.544 sys 0m1.328s 00:22:04.544 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.544 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:22:04.544 ************************************ 00:22:04.545 END TEST dd_sparse_file_to_bdev 00:22:04.545 ************************************ 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:22:04.545 ************************************ 00:22:04.545 START TEST dd_sparse_bdev_to_file 00:22:04.545 ************************************ 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:22:04.545 17:00:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:22:04.545 { 00:22:04.545 "subsystems": [ 00:22:04.545 { 00:22:04.545 "subsystem": "bdev", 00:22:04.545 "config": [ 00:22:04.545 { 00:22:04.545 "params": { 00:22:04.545 "block_size": 4096, 00:22:04.545 "filename": "dd_sparse_aio_disk", 00:22:04.545 "name": "dd_aio" 00:22:04.545 }, 00:22:04.545 "method": "bdev_aio_create" 00:22:04.545 }, 00:22:04.545 { 00:22:04.545 "method": "bdev_wait_for_examine" 00:22:04.545 } 00:22:04.545 ] 00:22:04.545 } 00:22:04.545 ] 00:22:04.545 } 00:22:04.545 [2024-07-22 
17:00:05.907266] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:04.545 [2024-07-22 17:00:05.907474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68471 ] 00:22:04.545 [2024-07-22 17:00:06.086169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.803 [2024-07-22 17:00:06.351312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.061 [2024-07-22 17:00:06.637765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:07.222  Copying: 12/36 [MB] (average 800 MBps) 00:22:07.222 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:22:07.222 00:22:07.222 real 0m2.714s 00:22:07.222 user 0m2.321s 00:22:07.222 sys 0m1.331s 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:22:07.222 ************************************ 00:22:07.222 END TEST dd_sparse_bdev_to_file 00:22:07.222 ************************************ 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:22:07.222 ************************************ 00:22:07.222 END TEST spdk_dd_sparse 00:22:07.222 ************************************ 00:22:07.222 00:22:07.222 real 0m8.546s 00:22:07.222 user 0m7.145s 00:22:07.222 sys 0m4.232s 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.222 17:00:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:22:07.222 17:00:08 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:22:07.222 17:00:08 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative 
/home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:22:07.222 17:00:08 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:07.222 17:00:08 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.222 17:00:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:22:07.222 ************************************ 00:22:07.222 START TEST spdk_dd_negative 00:22:07.222 ************************************ 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:22:07.223 * Looking for test storage... 00:22:07.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:07.223 ************************************ 00:22:07.223 START TEST dd_invalid_arguments 00:22:07.223 ************************************ 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:07.223 17:00:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:22:07.223 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:22:07.223 00:22:07.223 CPU options: 00:22:07.223 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:22:07.223 (like [0,1,10]) 00:22:07.223 --lcores lcore to CPU mapping list. The list is in the format: 00:22:07.223 [<,lcores[@CPUs]>...] 00:22:07.223 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:22:07.223 Within the group, '-' is used for range separator, 00:22:07.223 ',' is used for single number separator. 00:22:07.223 '( )' can be omitted for single element group, 00:22:07.223 '@' can be omitted if cpus and lcores have the same value 00:22:07.223 --disable-cpumask-locks Disable CPU core lock files. 00:22:07.223 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:22:07.223 pollers in the app support interrupt mode) 00:22:07.223 -p, --main-core main (primary) core for DPDK 00:22:07.223 00:22:07.223 Configuration options: 00:22:07.223 -c, --config, --json JSON config file 00:22:07.223 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:22:07.223 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:22:07.223 --wait-for-rpc wait for RPCs to initialize subsystems 00:22:07.223 --rpcs-allowed comma-separated list of permitted RPCS 00:22:07.223 --json-ignore-init-errors don't exit on invalid config entry 00:22:07.223 00:22:07.223 Memory options: 00:22:07.223 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:22:07.223 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:22:07.223 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:22:07.223 -R, --huge-unlink unlink huge files after initialization 00:22:07.223 -n, --mem-channels number of memory channels used for DPDK 00:22:07.223 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:22:07.223 --msg-mempool-size global message memory pool size in count (default: 262143) 00:22:07.223 --no-huge run without using hugepages 00:22:07.223 -i, --shm-id shared memory ID (optional) 00:22:07.223 -g, --single-file-segments force creating just one hugetlbfs file 00:22:07.223 00:22:07.223 PCI options: 00:22:07.223 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:22:07.223 -B, --pci-blocked pci addr to block (can be used more than once) 00:22:07.223 -u, --no-pci disable PCI access 00:22:07.223 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:22:07.223 00:22:07.223 Log options: 00:22:07.223 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:22:07.223 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:22:07.223 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:22:07.223 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:22:07.223 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:22:07.223 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:22:07.223 nvme_auth, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, 00:22:07.223 sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, 00:22:07.223 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:22:07.223 vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, 00:22:07.223 vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 
virtio_blk, virtio_dev, 00:22:07.223 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:22:07.223 --silence-noticelog disable notice level logging to stderr 00:22:07.223 00:22:07.223 Trace options: 00:22:07.223 --num-trace-entries number of trace entries for each core, must be power of 2, 00:22:07.223 setting 0 to disable trace (default 32768) 00:22:07.223 Tracepoints vary in size and can use more than one trace entry. 00:22:07.223 -e, --tpoint-group [: 128 )) 00:22:07.490 17:00:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.490 17:00:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.490 00:22:07.490 real 0m0.154s 00:22:07.490 user 0m0.084s 00:22:07.490 sys 0m0.068s 00:22:07.490 17:00:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.490 17:00:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:22:07.490 ************************************ 00:22:07.490 END TEST dd_double_input 00:22:07.490 ************************************ 00:22:07.758 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:07.758 17:00:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:22:07.758 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:07.759 ************************************ 00:22:07.759 START TEST dd_double_output 00:22:07.759 ************************************ 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:22:07.759 [2024-07-22 17:00:09.239233] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.759 00:22:07.759 real 0m0.154s 00:22:07.759 user 0m0.076s 00:22:07.759 sys 0m0.076s 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:22:07.759 ************************************ 00:22:07.759 END TEST dd_double_output 00:22:07.759 ************************************ 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:07.759 ************************************ 00:22:07.759 START TEST dd_no_input 00:22:07.759 ************************************ 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:07.759 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:22:08.028 [2024-07-22 17:00:09.444633] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.028 00:22:08.028 real 0m0.148s 00:22:08.028 user 0m0.075s 00:22:08.028 sys 0m0.072s 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 ************************************ 00:22:08.028 END TEST dd_no_input 00:22:08.028 ************************************ 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 ************************************ 00:22:08.028 START TEST dd_no_output 00:22:08.028 ************************************ 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:08.028 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:08.288 [2024-07-22 17:00:09.649357] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.288 00:22:08.288 real 0m0.151s 00:22:08.288 user 0m0.076s 00:22:08.288 sys 0m0.074s 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.288 ************************************ 00:22:08.288 END TEST dd_no_output 00:22:08.288 ************************************ 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:08.288 ************************************ 00:22:08.288 START TEST dd_wrong_blocksize 00:22:08.288 ************************************ 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:08.288 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:22:08.288 [2024-07-22 17:00:09.884589] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:22:08.552 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:22:08.552 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.552 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.553 00:22:08.553 real 0m0.195s 00:22:08.553 user 0m0.096s 00:22:08.553 sys 0m0.097s 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.553 ************************************ 00:22:08.553 END TEST dd_wrong_blocksize 00:22:08.553 ************************************ 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.553 17:00:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:08.553 ************************************ 00:22:08.553 START TEST dd_smaller_blocksize 00:22:08.553 ************************************ 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.553 17:00:10 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:08.553 17:00:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:22:08.553 [2024-07-22 17:00:10.134487] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:08.553 [2024-07-22 17:00:10.134693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68723 ] 00:22:08.816 [2024-07-22 17:00:10.312581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.081 [2024-07-22 17:00:10.646622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.400 [2024-07-22 17:00:10.916137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:09.995 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:22:09.995 [2024-07-22 17:00:11.511922] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:22:09.995 [2024-07-22 17:00:11.512042] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:10.965 [2024-07-22 17:00:12.525519] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.547 00:22:11.547 real 0m3.067s 00:22:11.547 user 0m2.370s 00:22:11.547 sys 0m0.577s 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:22:11.547 ************************************ 00:22:11.547 END TEST dd_smaller_blocksize 00:22:11.547 ************************************ 00:22:11.547 17:00:13 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:11.547 ************************************ 00:22:11.547 START TEST dd_invalid_count 00:22:11.547 ************************************ 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:11.547 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:22:11.814 [2024-07-22 17:00:13.244205] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.814 00:22:11.814 real 0m0.190s 00:22:11.814 user 0m0.091s 00:22:11.814 sys 0m0.095s 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.814 ************************************ 00:22:11.814 END TEST dd_invalid_count 00:22:11.814 ************************************ 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:11.814 ************************************ 00:22:11.814 START TEST dd_invalid_oflag 00:22:11.814 ************************************ 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.814 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.815 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.815 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:11.815 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:22:12.080 [2024-07-22 17:00:13.463218] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:22:12.080 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.081 00:22:12.081 real 0m0.167s 00:22:12.081 user 0m0.077s 00:22:12.081 sys 0m0.088s 00:22:12.081 17:00:13 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:12.081 ************************************ 00:22:12.081 END TEST dd_invalid_oflag 00:22:12.081 ************************************ 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:12.081 ************************************ 00:22:12.081 START TEST dd_invalid_iflag 00:22:12.081 ************************************ 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:12.081 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:22:12.081 [2024-07-22 17:00:13.695933] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:22:12.349 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:22:12.349 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.350 00:22:12.350 real 0m0.187s 00:22:12.350 user 0m0.102s 
00:22:12.350 sys 0m0.083s 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:22:12.350 ************************************ 00:22:12.350 END TEST dd_invalid_iflag 00:22:12.350 ************************************ 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:12.350 ************************************ 00:22:12.350 START TEST dd_unknown_flag 00:22:12.350 ************************************ 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:12.350 17:00:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:22:12.350 [2024-07-22 17:00:13.943090] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:22:12.350 [2024-07-22 17:00:13.943286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68848 ] 00:22:12.621 [2024-07-22 17:00:14.127372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.925 [2024-07-22 17:00:14.464666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.209 [2024-07-22 17:00:14.763187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:13.476 [2024-07-22 17:00:14.906049] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:22:13.476 [2024-07-22 17:00:14.906121] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:13.476 [2024-07-22 17:00:14.906192] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:22:13.476 [2024-07-22 17:00:14.906208] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:13.476 [2024-07-22 17:00:14.906493] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:22:13.476 [2024-07-22 17:00:14.906516] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:13.476 [2024-07-22 17:00:14.906596] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:22:13.476 [2024-07-22 17:00:14.906609] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:22:14.408 [2024-07-22 17:00:15.890844] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.984 00:22:14.984 real 0m2.599s 00:22:14.984 user 0m2.206s 00:22:14.984 sys 0m0.281s 00:22:14.984 ************************************ 00:22:14.984 END TEST dd_unknown_flag 00:22:14.984 ************************************ 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:14.984 ************************************ 00:22:14.984 START TEST dd_invalid_json 00:22:14.984 ************************************ 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:22:14.984 17:00:16 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:14.984 17:00:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:22:14.984 [2024-07-22 17:00:16.595019] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:22:14.984 [2024-07-22 17:00:16.595201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:22:15.246 [2024-07-22 17:00:16.772200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.510 [2024-07-22 17:00:17.028496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.510 [2024-07-22 17:00:17.028609] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:22:15.510 [2024-07-22 17:00:17.028634] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:15.510 [2024-07-22 17:00:17.028653] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:15.510 [2024-07-22 17:00:17.028723] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.076 00:22:16.076 real 0m1.073s 00:22:16.076 user 0m0.791s 00:22:16.076 sys 0m0.173s 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.076 ************************************ 00:22:16.076 END TEST dd_invalid_json 00:22:16.076 ************************************ 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:22:16.076 00:22:16.076 real 0m8.985s 00:22:16.076 user 0m6.391s 00:22:16.076 sys 0m2.227s 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.076 ************************************ 00:22:16.076 END TEST spdk_dd_negative 00:22:16.076 ************************************ 00:22:16.076 17:00:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:22:16.076 17:00:17 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:22:16.076 00:22:16.076 real 4m2.663s 00:22:16.076 user 3m23.969s 00:22:16.076 sys 1m19.619s 00:22:16.076 17:00:17 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.076 17:00:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:22:16.076 ************************************ 00:22:16.076 END TEST spdk_dd 00:22:16.076 ************************************ 00:22:16.076 17:00:17 -- common/autotest_common.sh@1142 -- # return 0 00:22:16.076 17:00:17 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:22:16.076 17:00:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:16.076 17:00:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:16.076 17:00:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.076 17:00:17 -- common/autotest_common.sh@10 -- # set +x 00:22:16.335 17:00:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 
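The dd_* negative tests that finish above all follow the same expect-failure pattern visible in the traces: each helper runs spdk_dd with a deliberately invalid flag combination (for example --of together with --ob, --bs=0, --count=-9, or --oflag without --of), spdk_dd logs an *ERROR* and exits non-zero, and the NOT wrapper from test/common/autotest_common.sh turns that failure into a test pass. A rough sketch of that wrapper, simplified from the exit-status handling seen in the traces (the real helper goes through an intermediate signal-number step and a case list, e.g. 244 -> 116 -> 1, which this sketch collapses), not the actual implementation:

  NOT() {
      local es=0
      "$@" || es=$?              # run the wrapped command, e.g. an invalid spdk_dd invocation
      if (( es > 128 )); then    # a status above 128 means the tool was killed by a signal
          es=1                   # count that as a plain failure
      fi
      (( !es == 0 ))             # as in the traces: succeed only if the wrapped command failed
  }

  # Example drawn from dd_double_output above; spdk_dd exits 22 after printing
  # "You may specify either --of or --ob, but not both.", so NOT returns 0.
  NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=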
00:22:16.335 17:00:17 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:22:16.335 17:00:17 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:22:16.335 17:00:17 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:22:16.335 17:00:17 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:22:16.335 17:00:17 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:22:16.335 17:00:17 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:22:16.335 17:00:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.335 17:00:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.335 17:00:17 -- common/autotest_common.sh@10 -- # set +x 00:22:16.335 ************************************ 00:22:16.335 START TEST nvmf_tcp 00:22:16.335 ************************************ 00:22:16.335 17:00:17 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:22:16.335 * Looking for test storage... 00:22:16.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:16.335 17:00:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:22:16.335 17:00:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:22:16.335 17:00:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:22:16.335 17:00:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.335 17:00:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.335 17:00:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.335 ************************************ 00:22:16.335 START TEST nvmf_target_core 00:22:16.335 ************************************ 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:22:16.335 * Looking for test storage... 00:22:16.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.335 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:16.595 ************************************ 00:22:16.595 START TEST nvmf_host_management 00:22:16.595 ************************************ 00:22:16.595 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:22:16.595 * Looking for test storage... 
00:22:16.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.595 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:16.596 Cannot find device "nvmf_init_br" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:16.596 Cannot find device "nvmf_tgt_br" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.596 Cannot find device "nvmf_tgt_br2" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:16.596 Cannot find device "nvmf_init_br" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:16.596 Cannot find device "nvmf_tgt_br" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:16.596 Cannot find device "nvmf_tgt_br2" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:16.596 Cannot find device "nvmf_br" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:16.596 Cannot find device "nvmf_init_if" 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:22:16.596 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:16.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:16.856 00:22:16.856 --- 10.0.0.2 ping statistics --- 00:22:16.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.856 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:16.856 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:17.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:17.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:22:17.114 00:22:17.114 --- 10.0.0.3 ping statistics --- 00:22:17.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.114 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:17.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:17.114 00:22:17.114 --- 10.0.0.1 ping statistics --- 00:22:17.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.114 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=69191 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:17.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
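For reference, the veth/namespace fixture assembled in the trace above (and whose reachability the three pings just confirmed) reduces to the following plain iproute2/iptables sequence. This is a consolidated sketch of what the trace shows, not the literal nvmf/common.sh code; the interface names and addresses are the ones from the log.

    # Target interfaces live in their own namespace; the host keeps the initiator side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target-side IPs.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peer ends together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Admit NVMe/TCP traffic (port 4420) and allow bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Same sanity pings as in the trace.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1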
00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 69191 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 69191 ']' 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.114 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.115 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.115 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.115 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:17.115 [2024-07-22 17:00:18.611627] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:17.115 [2024-07-22 17:00:18.612020] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.373 [2024-07-22 17:00:18.794011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.632 [2024-07-22 17:00:19.119932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.632 [2024-07-22 17:00:19.120003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.632 [2024-07-22 17:00:19.120019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.632 [2024-07-22 17:00:19.120037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.632 [2024-07-22 17:00:19.120053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
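The reactor and socket notices that follow come from the target launched just above: nvmf_tgt runs inside the namespace with core mask 0x1E (cores 1-4, matching the four reactor lines) and all tracepoint groups enabled, while waitforlisten blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the paths shown in the trace; the polling loop is an illustrative stand-in for the real waitforlisten helper, not its actual implementation:

    # Launch the target in the test namespace; -m 0x1E pins reactors to cores 1-4,
    # -e 0xFFFF enables every tracepoint group (hence the spdk_trace hint in the log).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Stand-in for waitforlisten: poll the RPC socket until it responds, and give
    # up early if the target process died before ever listening.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done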
00:22:17.632 [2024-07-22 17:00:19.120876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.632 [2024-07-22 17:00:19.120980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.632 [2024-07-22 17:00:19.121064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.632 [2024-07-22 17:00:19.121074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:17.890 [2024-07-22 17:00:19.405958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.148 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:18.148 [2024-07-22 17:00:19.648710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.149 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:18.406 Malloc0 00:22:18.406 [2024-07-22 17:00:19.798479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=69251 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 69251 /var/tmp/bdevperf.sock 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 69251 ']' 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:22:18.406 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:18.407 { 00:22:18.407 "params": { 00:22:18.407 "name": "Nvme$subsystem", 00:22:18.407 "trtype": "$TEST_TRANSPORT", 00:22:18.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.407 "adrfam": "ipv4", 00:22:18.407 "trsvcid": "$NVMF_PORT", 00:22:18.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.407 "hdgst": ${hdgst:-false}, 00:22:18.407 "ddgst": ${ddgst:-false} 00:22:18.407 }, 00:22:18.407 "method": "bdev_nvme_attach_controller" 00:22:18.407 } 00:22:18.407 EOF 00:22:18.407 )") 00:22:18.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
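The JSON printed next is what bdevperf consumes through --json /dev/fd/63: a single bdev_nvme_attach_controller entry pointing host0 at cnode0 on 10.0.0.2:4420. On the target side, that subsystem came from the rpcs.txt batch piped into rpc_cmd a little earlier (hence the Malloc0 and 10.0.0.2:4420 listener notices). The batch itself is not echoed in the trace, so the following is only a plausible minimal equivalent built from standard rpc.py calls; the Malloc bdev size and block size are illustrative, while the transport line is the one actually traced:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport creation as traced via rpc_cmd above.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192

    # Assumed content of the rpcs.txt batch (sizes illustrative): a Malloc bdev,
    # a subsystem with a namespace and a TCP listener, and host0 allow-listed so
    # the remove_host/add_host steps later have something to toggle.
    $rpc_py bdev_malloc_create -b Malloc0 64 512
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0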
00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:22:18.407 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:18.407 "params": { 00:22:18.407 "name": "Nvme0", 00:22:18.407 "trtype": "tcp", 00:22:18.407 "traddr": "10.0.0.2", 00:22:18.407 "adrfam": "ipv4", 00:22:18.407 "trsvcid": "4420", 00:22:18.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:18.407 "hdgst": false, 00:22:18.407 "ddgst": false 00:22:18.407 }, 00:22:18.407 "method": "bdev_nvme_attach_controller" 00:22:18.407 }' 00:22:18.407 [2024-07-22 17:00:19.966836] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:18.407 [2024-07-22 17:00:19.967213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69251 ] 00:22:18.665 [2024-07-22 17:00:20.136692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.924 [2024-07-22 17:00:20.417009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.182 [2024-07-22 17:00:20.698370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:19.440 Running I/O for 10 seconds... 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:19.440 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:22:19.440 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.440 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:22:19.440 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:22:19.440 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:22:19.698 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:22:19.698 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:22:19.698 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:22:19.698 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:22:19.698 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.698 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.958 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:22:19.958 [2024-07-22 17:00:21.348913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.958 [2024-07-22 17:00:21.349936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.958 [2024-07-22 17:00:21.349960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.349980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.350953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.350978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.959 [2024-07-22 17:00:21.351643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.959 [2024-07-22 17:00:21.351678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.351970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.351994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.960 [2024-07-22 17:00:21.352014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.352036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:22:19.960 [2024-07-22 17:00:21.352433] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:22:19.960 [2024-07-22 17:00:21.352635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.960 [2024-07-22 17:00:21.352676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.352704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.960 [2024-07-22 17:00:21.352727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.352762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.960 [2024-07-22 17:00:21.352784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.352806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.960 [2024-07-22 17:00:21.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.960 [2024-07-22 17:00:21.352848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:22:19.960 task offset: 73728 on job bdev=Nvme0n1 fails 00:22:19.960 00:22:19.960 Latency(us) 00:22:19.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.960 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:19.960 Job: 
Nvme0n1 ended in about 0.42 seconds with error 00:22:19.960 Verification LBA range: start 0x0 length 0x400 00:22:19.960 Nvme0n1 : 0.42 1375.00 85.94 152.78 0.00 40574.49 4618.73 39945.75 00:22:19.960 =================================================================================================================== 00:22:19.960 Total : 1375.00 85.94 152.78 0.00 40574.49 4618.73 39945.75 00:22:19.960 [2024-07-22 17:00:21.354165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:19.960 [2024-07-22 17:00:21.360467] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:19.960 [2024-07-22 17:00:21.360667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:19.960 [2024-07-22 17:00:21.372669] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 69251 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.893 { 00:22:20.893 "params": { 00:22:20.893 "name": "Nvme$subsystem", 00:22:20.893 "trtype": "$TEST_TRANSPORT", 00:22:20.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.893 "adrfam": "ipv4", 00:22:20.893 "trsvcid": "$NVMF_PORT", 00:22:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.893 "hdgst": ${hdgst:-false}, 00:22:20.893 "ddgst": ${ddgst:-false} 00:22:20.893 }, 00:22:20.893 "method": "bdev_nvme_attach_controller" 00:22:20.893 } 00:22:20.893 EOF 00:22:20.893 )") 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
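To recap the disruption that produced the long ABORTED/SQ-DELETION dump and the failed 0.42 s job summary above: once bdevperf had accumulated enough reads (the waitforio check on num_read_ops), the test revoked host0's access to cnode0 mid-verify, which aborted the in-flight WRITEs and forced a controller reset, then re-added the host so the reset could complete against the same listener; the first bdevperf is then killed and the short second run that follows re-attaches to confirm the target still serves I/O. Stripped of tracing, the host-management steps are roughly as below (rpc_cmd in the scripts is a thin wrapper over rpc.py against /var/tmp/spdk.sock):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Pull host0's access while I/O is outstanding -> qpair aborts + reset (dump above).
    $rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Restore access so the controller reset can finish.
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Kill the disrupted bdevperf (pid 69251 in this run) and start a fresh 1-second
    # verify run with the same generated JSON to confirm I/O still flows end to end.
    kill -9 69251 || true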
00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:22:20.893 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:20.893 "params": { 00:22:20.893 "name": "Nvme0", 00:22:20.893 "trtype": "tcp", 00:22:20.893 "traddr": "10.0.0.2", 00:22:20.893 "adrfam": "ipv4", 00:22:20.893 "trsvcid": "4420", 00:22:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:20.893 "hdgst": false, 00:22:20.893 "ddgst": false 00:22:20.893 }, 00:22:20.893 "method": "bdev_nvme_attach_controller" 00:22:20.893 }' 00:22:20.893 [2024-07-22 17:00:22.482284] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:20.893 [2024-07-22 17:00:22.482460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69293 ] 00:22:21.151 [2024-07-22 17:00:22.666300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.410 [2024-07-22 17:00:22.976296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.668 [2024-07-22 17:00:23.240365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:21.926 Running I/O for 1 seconds... 00:22:22.953 00:22:22.953 Latency(us) 00:22:22.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.953 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.953 Verification LBA range: start 0x0 length 0x400 00:22:22.953 Nvme0n1 : 1.02 1508.61 94.29 0.00 0.00 41664.42 5180.46 36700.16 00:22:22.953 =================================================================================================================== 00:22:22.953 Total : 1508.61 94.29 0.00 0.00 41664.42 5180.46 36700.16 00:22:24.865 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 69251 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:24.865 17:00:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:24.865 rmmod nvme_tcp 00:22:24.865 rmmod nvme_fabrics 00:22:24.865 rmmod nvme_keyring 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 69191 ']' 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 69191 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 69191 ']' 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 69191 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69191 00:22:24.865 killing process with pid 69191 00:22:24.865 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:24.866 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:24.866 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69191' 00:22:24.866 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 69191 00:22:24.866 17:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 69191 00:22:26.240 [2024-07-22 17:00:27.799418] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:26.498 ************************************ 00:22:26.498 END TEST nvmf_host_management 00:22:26.498 
************************************ 00:22:26.498 00:22:26.498 real 0m9.952s 00:22:26.498 user 0m39.050s 00:22:26.498 sys 0m1.971s 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:26.498 ************************************ 00:22:26.498 START TEST nvmf_lvol 00:22:26.498 ************************************ 00:22:26.498 17:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:22:26.498 * Looking for test storage... 00:22:26.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.498 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.499 17:00:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:26.499 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.757 17:00:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:26.757 Cannot find device "nvmf_tgt_br" 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.757 Cannot find device "nvmf_tgt_br2" 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:26.757 Cannot find device "nvmf_tgt_br" 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:22:26.757 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:26.757 Cannot find device "nvmf_tgt_br2" 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.758 17:00:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.758 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:27.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:22:27.016 00:22:27.016 --- 10.0.0.2 ping statistics --- 00:22:27.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.016 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:27.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:27.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:22:27.016 00:22:27.016 --- 10.0.0.3 ping statistics --- 00:22:27.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.016 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:27.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:27.016 00:22:27.016 --- 10.0.0.1 ping statistics --- 00:22:27.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.016 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=69550 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 69550 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 69550 ']' 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.016 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:27.016 [2024-07-22 17:00:28.630259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
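For reference, the network fixture that nvmf_veth_init assembles in the trace above can be reproduced with the condensed sketch below. Interface names, addresses, and the 4420 port rule are taken directly from the log; the earlier "Cannot find device" and "Cannot open network namespace" messages are the harness clearing leftovers from a previous run (each failing probe is followed by "true"), so that cleanup and all error handling are omitted here.

# Sketch of the veth/namespace/bridge fixture built by nvmf_veth_init (names and addresses as logged above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target port 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target port 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # host -> namespace reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host, as in the trace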
00:22:27.016 [2024-07-22 17:00:28.630423] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.285 [2024-07-22 17:00:28.820812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:27.543 [2024-07-22 17:00:29.129877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.543 [2024-07-22 17:00:29.129963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.543 [2024-07-22 17:00:29.129977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.543 [2024-07-22 17:00:29.129993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.543 [2024-07-22 17:00:29.130005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.543 [2024-07-22 17:00:29.130284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.543 [2024-07-22 17:00:29.130693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.543 [2024-07-22 17:00:29.130730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.801 [2024-07-22 17:00:29.405988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.058 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:28.315 [2024-07-22 17:00:29.848378] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.315 17:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.572 17:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:22:28.572 17:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:29.137 17:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:22:29.137 17:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:22:29.395 17:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:22:29.653 17:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3fea3b0d-0014-43f9-a510-566f30a549cd 00:22:29.653 17:00:31 
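A sketch of the target-side provisioning just traced: the nvmf target is launched inside the namespace with core mask 0x7, a TCP transport is created, and two 64 MiB malloc bdevs are striped into a raid0 that backs the logical volume store. Paths, sizes, and flags are copied from the log; the $rpc shorthand and the $lvs capture are conveniences for the steps shown further below.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Start the target in the test namespace (nvmfappstart -m 0x7 in the trace);
# the harness then waits for the RPC socket before issuing commands.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
# Transport plus backing store: two 64 MiB / 512 B malloc bdevs in a raid0 (64 KiB strips).
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                  # -> Malloc0
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID, used below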
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3fea3b0d-0014-43f9-a510-566f30a549cd lvol 20 00:22:29.911 17:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d7c26d62-8a20-42b8-8c96-c317d45f4727 00:22:29.911 17:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:29.911 17:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d7c26d62-8a20-42b8-8c96-c317d45f4727 00:22:30.169 17:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:30.427 [2024-07-22 17:00:31.892509] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.427 17:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.686 17:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=69626 00:22:30.686 17:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:22:30.686 17:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:22:31.655 17:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d7c26d62-8a20-42b8-8c96-c317d45f4727 MY_SNAPSHOT 00:22:31.924 17:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=17fccd94-8e24-44e4-a156-d9ae4c4fc2c2 00:22:31.924 17:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d7c26d62-8a20-42b8-8c96-c317d45f4727 30 00:22:32.186 17:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 17fccd94-8e24-44e4-a156-d9ae4c4fc2c2 MY_CLONE 00:22:32.754 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5188761d-fad0-47bf-a27f-4b85b5572b68 00:22:32.754 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5188761d-fad0-47bf-a27f-4b85b5572b68 00:22:33.330 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 69626 00:22:41.499 Initializing NVMe Controllers 00:22:41.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:22:41.499 Controller IO queue size 128, less than required. 00:22:41.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:41.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:22:41.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:22:41.499 Initialization complete. Launching workers. 
00:22:41.499 ======================================================== 00:22:41.499 Latency(us) 00:22:41.499 Device Information : IOPS MiB/s Average min max 00:22:41.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9632.93 37.63 13288.10 474.62 199890.40 00:22:41.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9282.56 36.26 13786.47 4871.71 217769.29 00:22:41.499 ======================================================== 00:22:41.499 Total : 18915.49 73.89 13532.67 474.62 217769.29 00:22:41.499 00:22:41.499 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:41.499 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d7c26d62-8a20-42b8-8c96-c317d45f4727 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fea3b0d-0014-43f9-a510-566f30a549cd 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.767 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.767 rmmod nvme_tcp 00:22:42.030 rmmod nvme_fabrics 00:22:42.030 rmmod nvme_keyring 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 69550 ']' 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 69550 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 69550 ']' 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 69550 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69550 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 69550' 00:22:42.030 killing process with pid 69550 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 69550 00:22:42.030 17:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 69550 00:22:43.933 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.933 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.933 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:43.934 00:22:43.934 real 0m17.380s 00:22:43.934 user 1m6.984s 00:22:43.934 sys 0m5.511s 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:43.934 ************************************ 00:22:43.934 END TEST nvmf_lvol 00:22:43.934 ************************************ 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:43.934 ************************************ 00:22:43.934 START TEST nvmf_lvs_grow 00:22:43.934 ************************************ 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:22:43.934 * Looking for test storage... 
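To summarize the body of the nvmf_lvol test that finished above: a 20 MiB lvol is carved out of the store, exported over NVMe/TCP, and spdk_nvme_perf drives 10 seconds of random writes against it while the snapshot, resize, clone, and inflate RPCs run; everything is then deleted in reverse order. A condensed sketch, with the UUIDs from this run replaced by shell variables:

lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 4 KiB random writes, queue depth 128, cores 3-4 (0x18), matching the lcore 3/4 result lines.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # lvol operations while I/O runs
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"
# Teardown, in the order seen in the trace: subsystem, then the lvol, then the store.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"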
00:22:43.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.934 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:22:44.201 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:44.202 Cannot find device "nvmf_tgt_br" 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:44.202 Cannot find device "nvmf_tgt_br2" 00:22:44.202 17:00:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:44.202 Cannot find device "nvmf_tgt_br" 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:44.202 Cannot find device "nvmf_tgt_br2" 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:44.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:44.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:44.202 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:44.463 17:00:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:44.463 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:44.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:22:44.464 00:22:44.464 --- 10.0.0.2 ping statistics --- 00:22:44.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.464 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:44.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:44.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:44.464 00:22:44.464 --- 10.0.0.3 ping statistics --- 00:22:44.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.464 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:44.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:44.464 00:22:44.464 --- 10.0.0.1 ping statistics --- 00:22:44.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.464 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=69960 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 69960 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 69960 ']' 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.464 17:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:44.723 [2024-07-22 17:00:46.093389] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:22:44.723 [2024-07-22 17:00:46.094156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.723 [2024-07-22 17:00:46.279806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.290 [2024-07-22 17:00:46.614605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.290 [2024-07-22 17:00:46.614670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.290 [2024-07-22 17:00:46.614683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.290 [2024-07-22 17:00:46.614697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.290 [2024-07-22 17:00:46.614708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.290 [2024-07-22 17:00:46.614755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.290 [2024-07-22 17:00:46.880048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.550 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:45.808 [2024-07-22 17:00:47.340312] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:45.808 ************************************ 00:22:45.808 START TEST lvs_grow_clean 00:22:45.808 ************************************ 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:45.808 17:00:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:45.808 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:46.067 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:46.067 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:46.340 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=62c7928c-233f-4e47-a6f9-848d670300f5 00:22:46.340 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:22:46.340 17:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:46.629 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:46.629 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:46.629 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62c7928c-233f-4e47-a6f9-848d670300f5 lvol 150 00:22:46.888 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=52a17ebc-bfa4-4101-a66c-45b71fc7c0bd 00:22:46.888 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:46.888 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:47.147 [2024-07-22 17:00:48.603352] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:47.147 [2024-07-22 17:00:48.603450] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:47.147 true 00:22:47.147 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:22:47.147 17:00:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:47.405 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:47.405 17:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:47.662 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52a17ebc-bfa4-4101-a66c-45b71fc7c0bd 00:22:47.920 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:48.178 [2024-07-22 17:00:49.548051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.178 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=70049 00:22:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 70049 /var/tmp/bdevperf.sock 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 70049 ']' 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.436 17:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:22:48.436 [2024-07-22 17:00:49.925607] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
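The lvs_grow_clean setup traced above hinges on an AIO bdev backed by a plain file: the file starts at 200 MiB, which yields 49 usable 4 MiB clusters, a 150 MiB lvol is created on it, and the file is then grown to 400 MiB and rescanned. The cluster count intentionally stays at 49 until bdev_lvol_grow_lvstore is called later in the test. A sketch of those steps, with the path and sizes as logged:

aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
rm -f "$aio" && truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096                        # 4 KiB block size
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                 # 150 MiB volume
truncate -s 400M "$aio"                                          # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                                    # ...and rescan: 51200 -> 102400 blocks
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49 before the grow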
00:22:48.436 [2024-07-22 17:00:49.925952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70049 ] 00:22:48.693 [2024-07-22 17:00:50.094389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.951 [2024-07-22 17:00:50.372381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.208 [2024-07-22 17:00:50.646700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:49.208 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.208 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:22:49.208 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:49.774 Nvme0n1 00:22:49.774 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:49.774 [ 00:22:49.774 { 00:22:49.774 "name": "Nvme0n1", 00:22:49.774 "aliases": [ 00:22:49.774 "52a17ebc-bfa4-4101-a66c-45b71fc7c0bd" 00:22:49.774 ], 00:22:49.774 "product_name": "NVMe disk", 00:22:49.774 "block_size": 4096, 00:22:49.774 "num_blocks": 38912, 00:22:49.774 "uuid": "52a17ebc-bfa4-4101-a66c-45b71fc7c0bd", 00:22:49.774 "assigned_rate_limits": { 00:22:49.774 "rw_ios_per_sec": 0, 00:22:49.774 "rw_mbytes_per_sec": 0, 00:22:49.774 "r_mbytes_per_sec": 0, 00:22:49.774 "w_mbytes_per_sec": 0 00:22:49.774 }, 00:22:49.774 "claimed": false, 00:22:49.774 "zoned": false, 00:22:49.774 "supported_io_types": { 00:22:49.774 "read": true, 00:22:49.774 "write": true, 00:22:49.774 "unmap": true, 00:22:49.774 "flush": true, 00:22:49.774 "reset": true, 00:22:49.774 "nvme_admin": true, 00:22:49.774 "nvme_io": true, 00:22:49.774 "nvme_io_md": false, 00:22:49.774 "write_zeroes": true, 00:22:49.774 "zcopy": false, 00:22:49.774 "get_zone_info": false, 00:22:49.774 "zone_management": false, 00:22:49.774 "zone_append": false, 00:22:49.774 "compare": true, 00:22:49.774 "compare_and_write": true, 00:22:49.774 "abort": true, 00:22:49.774 "seek_hole": false, 00:22:49.774 "seek_data": false, 00:22:49.774 "copy": true, 00:22:49.774 "nvme_iov_md": false 00:22:49.774 }, 00:22:49.774 "memory_domains": [ 00:22:49.774 { 00:22:49.774 "dma_device_id": "system", 00:22:49.774 "dma_device_type": 1 00:22:49.774 } 00:22:49.774 ], 00:22:49.774 "driver_specific": { 00:22:49.774 "nvme": [ 00:22:49.774 { 00:22:49.774 "trid": { 00:22:49.774 "trtype": "TCP", 00:22:49.774 "adrfam": "IPv4", 00:22:49.774 "traddr": "10.0.0.2", 00:22:49.774 "trsvcid": "4420", 00:22:49.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:49.774 }, 00:22:49.774 "ctrlr_data": { 00:22:49.774 "cntlid": 1, 00:22:49.774 "vendor_id": "0x8086", 00:22:49.774 "model_number": "SPDK bdev Controller", 00:22:49.774 "serial_number": "SPDK0", 00:22:49.774 "firmware_revision": "24.09", 00:22:49.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:49.774 "oacs": { 00:22:49.774 "security": 0, 00:22:49.774 "format": 0, 00:22:49.774 "firmware": 0, 00:22:49.774 "ns_manage": 0 
00:22:49.774 }, 00:22:49.774 "multi_ctrlr": true, 00:22:49.774 "ana_reporting": false 00:22:49.774 }, 00:22:49.774 "vs": { 00:22:49.774 "nvme_version": "1.3" 00:22:49.774 }, 00:22:49.774 "ns_data": { 00:22:49.774 "id": 1, 00:22:49.774 "can_share": true 00:22:49.774 } 00:22:49.774 } 00:22:49.774 ], 00:22:49.774 "mp_policy": "active_passive" 00:22:49.774 } 00:22:49.774 } 00:22:49.774 ] 00:22:49.774 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=70067 00:22:49.774 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.774 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:22:50.030 Running I/O for 10 seconds... 00:22:50.964 Latency(us) 00:22:50.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:50.964 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:22:50.964 =================================================================================================================== 00:22:50.964 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:22:50.964 00:22:51.899 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:22:51.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:51.899 Nvme0n1 : 2.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:22:51.899 =================================================================================================================== 00:22:51.899 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:22:51.899 00:22:52.157 true 00:22:52.157 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:22:52.157 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:52.416 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:52.416 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:52.416 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 70067 00:22:52.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:52.985 Nvme0n1 : 3.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:22:52.985 =================================================================================================================== 00:22:52.985 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:22:52.985 00:22:54.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:54.017 Nvme0n1 : 4.00 7270.75 28.40 0.00 0.00 0.00 0.00 0.00 00:22:54.017 =================================================================================================================== 00:22:54.017 Total : 7270.75 28.40 0.00 0.00 0.00 0.00 0.00 00:22:54.017 00:22:54.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:54.952 Nvme0n1 : 5.00 
7264.40 28.38 0.00 0.00 0.00 0.00 0.00 00:22:54.952 =================================================================================================================== 00:22:54.952 Total : 7264.40 28.38 0.00 0.00 0.00 0.00 0.00 00:22:54.952 00:22:55.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:55.896 Nvme0n1 : 6.00 7217.83 28.19 0.00 0.00 0.00 0.00 0.00 00:22:55.896 =================================================================================================================== 00:22:55.896 Total : 7217.83 28.19 0.00 0.00 0.00 0.00 0.00 00:22:55.896 00:22:57.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:57.299 Nvme0n1 : 7.00 7202.71 28.14 0.00 0.00 0.00 0.00 0.00 00:22:57.299 =================================================================================================================== 00:22:57.300 Total : 7202.71 28.14 0.00 0.00 0.00 0.00 0.00 00:22:57.300 00:22:58.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:58.258 Nvme0n1 : 8.00 7191.38 28.09 0.00 0.00 0.00 0.00 0.00 00:22:58.258 =================================================================================================================== 00:22:58.258 Total : 7191.38 28.09 0.00 0.00 0.00 0.00 0.00 00:22:58.258 00:22:59.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:59.201 Nvme0n1 : 9.00 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:22:59.201 =================================================================================================================== 00:22:59.201 Total : 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:22:59.201 00:23:00.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:00.145 Nvme0n1 : 10.00 7150.10 27.93 0.00 0.00 0.00 0.00 0.00 00:23:00.145 =================================================================================================================== 00:23:00.145 Total : 7150.10 27.93 0.00 0.00 0.00 0.00 0.00 00:23:00.145 00:23:00.145 00:23:00.145 Latency(us) 00:23:00.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:00.145 Nvme0n1 : 10.01 7155.58 27.95 0.00 0.00 17882.60 14355.50 39446.43 00:23:00.145 =================================================================================================================== 00:23:00.145 Total : 7155.58 27.95 0.00 0.00 17882.60 14355.50 39446.43 00:23:00.145 0 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 70049 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 70049 ']' 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 70049 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70049 00:23:00.145 killing process with pid 70049 00:23:00.145 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.145 00:23:00.145 Latency(us) 00:23:00.145 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:23:00.145 =================================================================================================================== 00:23:00.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70049' 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 70049 00:23:00.145 17:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 70049 00:23:01.519 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:01.778 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:02.037 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:02.037 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:02.296 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:23:02.296 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:23:02.296 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:02.555 [2024-07-22 17:01:04.017969] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.555 17:01:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:02.555 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:02.814 request: 00:23:02.814 { 00:23:02.814 "uuid": "62c7928c-233f-4e47-a6f9-848d670300f5", 00:23:02.814 "method": "bdev_lvol_get_lvstores", 00:23:02.814 "req_id": 1 00:23:02.814 } 00:23:02.814 Got JSON-RPC error response 00:23:02.814 response: 00:23:02.814 { 00:23:02.814 "code": -19, 00:23:02.814 "message": "No such device" 00:23:02.814 } 00:23:02.814 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:23:02.814 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.814 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.814 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.814 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:03.073 aio_bdev 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 52a17ebc-bfa4-4101-a66c-45b71fc7c0bd 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=52a17ebc-bfa4-4101-a66c-45b71fc7c0bd 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:03.073 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:03.350 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52a17ebc-bfa4-4101-a66c-45b71fc7c0bd -t 2000 00:23:03.350 [ 00:23:03.350 { 00:23:03.350 "name": "52a17ebc-bfa4-4101-a66c-45b71fc7c0bd", 00:23:03.350 "aliases": [ 00:23:03.350 "lvs/lvol" 00:23:03.350 ], 00:23:03.350 "product_name": "Logical Volume", 00:23:03.350 "block_size": 4096, 00:23:03.350 "num_blocks": 38912, 00:23:03.350 "uuid": "52a17ebc-bfa4-4101-a66c-45b71fc7c0bd", 00:23:03.350 
"assigned_rate_limits": { 00:23:03.350 "rw_ios_per_sec": 0, 00:23:03.350 "rw_mbytes_per_sec": 0, 00:23:03.350 "r_mbytes_per_sec": 0, 00:23:03.350 "w_mbytes_per_sec": 0 00:23:03.350 }, 00:23:03.350 "claimed": false, 00:23:03.350 "zoned": false, 00:23:03.350 "supported_io_types": { 00:23:03.350 "read": true, 00:23:03.350 "write": true, 00:23:03.350 "unmap": true, 00:23:03.350 "flush": false, 00:23:03.350 "reset": true, 00:23:03.350 "nvme_admin": false, 00:23:03.350 "nvme_io": false, 00:23:03.350 "nvme_io_md": false, 00:23:03.350 "write_zeroes": true, 00:23:03.350 "zcopy": false, 00:23:03.350 "get_zone_info": false, 00:23:03.350 "zone_management": false, 00:23:03.350 "zone_append": false, 00:23:03.350 "compare": false, 00:23:03.350 "compare_and_write": false, 00:23:03.350 "abort": false, 00:23:03.350 "seek_hole": true, 00:23:03.350 "seek_data": true, 00:23:03.350 "copy": false, 00:23:03.350 "nvme_iov_md": false 00:23:03.350 }, 00:23:03.350 "driver_specific": { 00:23:03.350 "lvol": { 00:23:03.350 "lvol_store_uuid": "62c7928c-233f-4e47-a6f9-848d670300f5", 00:23:03.350 "base_bdev": "aio_bdev", 00:23:03.350 "thin_provision": false, 00:23:03.350 "num_allocated_clusters": 38, 00:23:03.350 "snapshot": false, 00:23:03.350 "clone": false, 00:23:03.350 "esnap_clone": false 00:23:03.350 } 00:23:03.350 } 00:23:03.350 } 00:23:03.350 ] 00:23:03.350 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:23:03.350 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:03.350 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:23:03.614 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:03.614 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:03.614 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:03.873 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:03.873 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 52a17ebc-bfa4-4101-a66c-45b71fc7c0bd 00:23:04.132 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62c7928c-233f-4e47-a6f9-848d670300f5 00:23:04.390 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:04.649 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:05.215 ************************************ 00:23:05.215 END TEST lvs_grow_clean 00:23:05.215 ************************************ 00:23:05.215 00:23:05.215 real 0m19.198s 00:23:05.215 user 0m17.473s 00:23:05.215 sys 0m2.979s 00:23:05.215 17:01:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:05.215 ************************************ 00:23:05.215 START TEST lvs_grow_dirty 00:23:05.215 ************************************ 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:05.215 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:05.474 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:05.474 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:05.474 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:05.732 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:05.732 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:05.732 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 
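For reference, the lvs_grow dirty-variant setup recorded in the xtrace above reduces to a short rpc.py sequence. This is a condensed sketch, not the literal nvmf_lvs_grow.sh script: it assumes it is run from /home/vagrant/spdk_repo/spdk with the target already serving its RPC socket, and it omits the traps and helper plumbing.

  # create a 200 MiB file-backed AIO bdev and build an lvstore on it with 4 MiB clusters
  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  # 200 MiB / 4 MiB = 50 clusters; once lvstore metadata is laid down, 49 remain as
  # data clusters, which is what the (( data_clusters == 49 )) check above expects
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The 150 MiB lvol carved out of that store in the next few entries is what gets exported over NVMe/TCP and written to while the store is grown underneath it.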
00:23:05.732 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:05.732 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3b313db-1cb9-4516-996e-147ca190e2d5 lvol 150 00:23:05.989 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:05.989 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:05.989 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:06.248 [2024-07-22 17:01:07.687317] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:06.248 [2024-07-22 17:01:07.687416] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:06.248 true 00:23:06.248 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:06.248 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:06.507 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:06.507 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:06.765 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:07.023 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:07.023 [2024-07-22 17:01:08.607994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.023 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=70321 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 70321 /var/tmp/bdevperf.sock 00:23:07.282 17:01:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 70321 ']' 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.282 17:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:07.540 [2024-07-22 17:01:08.974094] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:07.540 [2024-07-22 17:01:08.974549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70321 ] 00:23:07.798 [2024-07-22 17:01:09.158572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.057 [2024-07-22 17:01:09.440709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.315 [2024-07-22 17:01:09.689917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:08.315 17:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.315 17:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:23:08.315 17:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:08.572 Nvme0n1 00:23:08.572 17:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:08.830 [ 00:23:08.830 { 00:23:08.830 "name": "Nvme0n1", 00:23:08.830 "aliases": [ 00:23:08.830 "2ca2a8cc-8ca4-4635-8790-dc761d54fd5a" 00:23:08.830 ], 00:23:08.830 "product_name": "NVMe disk", 00:23:08.830 "block_size": 4096, 00:23:08.830 "num_blocks": 38912, 00:23:08.830 "uuid": "2ca2a8cc-8ca4-4635-8790-dc761d54fd5a", 00:23:08.830 "assigned_rate_limits": { 00:23:08.830 "rw_ios_per_sec": 0, 00:23:08.830 "rw_mbytes_per_sec": 0, 00:23:08.830 "r_mbytes_per_sec": 0, 00:23:08.830 "w_mbytes_per_sec": 0 00:23:08.830 }, 00:23:08.830 "claimed": false, 00:23:08.830 "zoned": false, 00:23:08.830 "supported_io_types": { 00:23:08.830 "read": true, 00:23:08.830 "write": true, 00:23:08.830 "unmap": true, 00:23:08.830 "flush": true, 00:23:08.830 "reset": true, 00:23:08.830 "nvme_admin": true, 00:23:08.830 "nvme_io": true, 00:23:08.830 "nvme_io_md": false, 00:23:08.830 "write_zeroes": true, 00:23:08.830 "zcopy": false, 00:23:08.830 "get_zone_info": false, 00:23:08.830 "zone_management": false, 00:23:08.830 "zone_append": false, 00:23:08.830 
"compare": true, 00:23:08.830 "compare_and_write": true, 00:23:08.830 "abort": true, 00:23:08.830 "seek_hole": false, 00:23:08.830 "seek_data": false, 00:23:08.830 "copy": true, 00:23:08.830 "nvme_iov_md": false 00:23:08.830 }, 00:23:08.830 "memory_domains": [ 00:23:08.830 { 00:23:08.830 "dma_device_id": "system", 00:23:08.830 "dma_device_type": 1 00:23:08.830 } 00:23:08.830 ], 00:23:08.830 "driver_specific": { 00:23:08.830 "nvme": [ 00:23:08.830 { 00:23:08.830 "trid": { 00:23:08.830 "trtype": "TCP", 00:23:08.830 "adrfam": "IPv4", 00:23:08.830 "traddr": "10.0.0.2", 00:23:08.830 "trsvcid": "4420", 00:23:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:08.830 }, 00:23:08.830 "ctrlr_data": { 00:23:08.830 "cntlid": 1, 00:23:08.830 "vendor_id": "0x8086", 00:23:08.830 "model_number": "SPDK bdev Controller", 00:23:08.830 "serial_number": "SPDK0", 00:23:08.830 "firmware_revision": "24.09", 00:23:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:08.830 "oacs": { 00:23:08.830 "security": 0, 00:23:08.830 "format": 0, 00:23:08.830 "firmware": 0, 00:23:08.830 "ns_manage": 0 00:23:08.830 }, 00:23:08.830 "multi_ctrlr": true, 00:23:08.830 "ana_reporting": false 00:23:08.830 }, 00:23:08.830 "vs": { 00:23:08.830 "nvme_version": "1.3" 00:23:08.830 }, 00:23:08.830 "ns_data": { 00:23:08.830 "id": 1, 00:23:08.830 "can_share": true 00:23:08.830 } 00:23:08.830 } 00:23:08.830 ], 00:23:08.830 "mp_policy": "active_passive" 00:23:08.830 } 00:23:08.830 } 00:23:08.830 ] 00:23:08.830 17:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=70344 00:23:08.830 17:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:08.830 17:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.089 Running I/O for 10 seconds... 
00:23:10.038 Latency(us) 00:23:10.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:10.038 Nvme0n1 : 1.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:23:10.038 =================================================================================================================== 00:23:10.038 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:23:10.038 00:23:10.971 17:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:10.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:10.971 Nvme0n1 : 2.00 7810.50 30.51 0.00 0.00 0.00 0.00 0.00 00:23:10.971 =================================================================================================================== 00:23:10.971 Total : 7810.50 30.51 0.00 0.00 0.00 0.00 0.00 00:23:10.971 00:23:11.229 true 00:23:11.230 17:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:11.230 17:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:11.488 17:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:11.488 17:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:11.488 17:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 70344 00:23:12.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:12.131 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:23:12.131 =================================================================================================================== 00:23:12.131 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:23:12.131 00:23:13.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:13.067 Nvme0n1 : 4.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:23:13.067 =================================================================================================================== 00:23:13.067 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:23:13.067 00:23:14.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:14.004 Nvme0n1 : 5.00 7569.20 29.57 0.00 0.00 0.00 0.00 0.00 00:23:14.004 =================================================================================================================== 00:23:14.004 Total : 7569.20 29.57 0.00 0.00 0.00 0.00 0.00 00:23:14.004 00:23:14.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:14.959 Nvme0n1 : 6.00 7451.67 29.11 0.00 0.00 0.00 0.00 0.00 00:23:14.959 =================================================================================================================== 00:23:14.959 Total : 7451.67 29.11 0.00 0.00 0.00 0.00 0.00 00:23:14.959 00:23:15.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:15.896 Nvme0n1 : 7.00 7421.29 28.99 0.00 0.00 0.00 0.00 0.00 00:23:15.896 =================================================================================================================== 00:23:15.896 
Total : 7421.29 28.99 0.00 0.00 0.00 0.00 0.00 00:23:15.896 00:23:17.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:17.298 Nvme0n1 : 8.00 7414.38 28.96 0.00 0.00 0.00 0.00 0.00 00:23:17.298 =================================================================================================================== 00:23:17.298 Total : 7414.38 28.96 0.00 0.00 0.00 0.00 0.00 00:23:17.298 00:23:17.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:17.893 Nvme0n1 : 9.00 7380.78 28.83 0.00 0.00 0.00 0.00 0.00 00:23:17.893 =================================================================================================================== 00:23:17.893 Total : 7380.78 28.83 0.00 0.00 0.00 0.00 0.00 00:23:17.893 00:23:19.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:19.273 Nvme0n1 : 10.00 7366.60 28.78 0.00 0.00 0.00 0.00 0.00 00:23:19.273 =================================================================================================================== 00:23:19.273 Total : 7366.60 28.78 0.00 0.00 0.00 0.00 0.00 00:23:19.273 00:23:19.273 00:23:19.273 Latency(us) 00:23:19.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:19.273 Nvme0n1 : 10.02 7367.79 28.78 0.00 0.00 17367.71 5118.05 80390.83 00:23:19.273 =================================================================================================================== 00:23:19.273 Total : 7367.79 28.78 0.00 0.00 17367.71 5118.05 80390.83 00:23:19.273 0 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 70321 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 70321 ']' 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 70321 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70321 00:23:19.273 killing process with pid 70321 00:23:19.273 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.273 00:23:19.273 Latency(us) 00:23:19.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.273 =================================================================================================================== 00:23:19.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70321' 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 70321 00:23:19.273 17:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 70321 
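Both variants grow the store the same way: the backing file is doubled to 400 MiB and the AIO bdev rescanned during setup, and then, about two seconds into the bdevperf job, bdev_lvol_grow_lvstore is issued while writes are still in flight. Condensed from the entries above, with the UUID and paths taken from this run:

  truncate -s 400M test/nvmf/target/aio_bdev           # done during setup
  scripts/rpc.py bdev_aio_rescan aio_bdev               # 51200 -> 102400 blocks, per the bdev_aio notice
  scripts/rpc.py bdev_lvol_grow_lvstore -u f3b313db-1cb9-4516-996e-147ca190e2d5   # issued mid-run
  # 400 MiB at 4 MiB per cluster gives 100 clusters, 99 of them data clusters
  scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 \
      | jq -r '.[0].total_data_clusters'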
00:23:20.651 17:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:20.651 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:20.959 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:20.960 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 69960 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 69960 00:23:21.231 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 69960 Killed "${NVMF_APP[@]}" "$@" 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:21.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=70488 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 70488 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 70488 ']' 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
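What distinguishes the dirty variant starts here: with 38 of the 99 data clusters allocated to the 150 MiB lvol (hence free_clusters == 61), the original target, pid 69960, is killed with SIGKILL instead of being shut down cleanly, leaving the lvstore dirty on disk. A fresh nvmf_tgt is then started and the AIO bdev is re-created on the same file, which is what produces the "Performing recovery on blobstore" notices below; after recovery the script re-checks that free_clusters is still 61 and total_data_clusters is still 99. A minimal sketch of that sequence, assuming the same repo-relative paths as above:

  # 99 data clusters minus ceil(150 MiB / 4 MiB) = 99 - 38 = 61 free clusters
  scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 \
      | jq -r '.[0].free_clusters'
  kill -9 69960                                  # SIGKILL the target while the store is dirty
  # restart the target in the test netns and re-create the AIO bdev; loading the
  # blobstore replays its metadata (the recovery notices that follow)
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096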
00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.231 17:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:21.231 [2024-07-22 17:01:22.767268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:21.231 [2024-07-22 17:01:22.767403] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.490 [2024-07-22 17:01:22.943111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.749 [2024-07-22 17:01:23.195706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.749 [2024-07-22 17:01:23.195979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.749 [2024-07-22 17:01:23.196104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.749 [2024-07-22 17:01:23.196223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.749 [2024-07-22 17:01:23.196286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.749 [2024-07-22 17:01:23.196414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.008 [2024-07-22 17:01:23.448831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.266 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:22.525 [2024-07-22 17:01:23.912729] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:22.525 [2024-07-22 17:01:23.913314] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:22.525 [2024-07-22 17:01:23.913643] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:22.525 17:01:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:22.525 17:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:22.783 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ca2a8cc-8ca4-4635-8790-dc761d54fd5a -t 2000 00:23:22.783 [ 00:23:22.783 { 00:23:22.783 "name": "2ca2a8cc-8ca4-4635-8790-dc761d54fd5a", 00:23:22.783 "aliases": [ 00:23:22.783 "lvs/lvol" 00:23:22.783 ], 00:23:22.783 "product_name": "Logical Volume", 00:23:22.783 "block_size": 4096, 00:23:22.783 "num_blocks": 38912, 00:23:22.783 "uuid": "2ca2a8cc-8ca4-4635-8790-dc761d54fd5a", 00:23:22.783 "assigned_rate_limits": { 00:23:22.783 "rw_ios_per_sec": 0, 00:23:22.783 "rw_mbytes_per_sec": 0, 00:23:22.783 "r_mbytes_per_sec": 0, 00:23:22.783 "w_mbytes_per_sec": 0 00:23:22.783 }, 00:23:22.783 "claimed": false, 00:23:22.783 "zoned": false, 00:23:22.783 "supported_io_types": { 00:23:22.783 "read": true, 00:23:22.783 "write": true, 00:23:22.783 "unmap": true, 00:23:22.783 "flush": false, 00:23:22.783 "reset": true, 00:23:22.783 "nvme_admin": false, 00:23:22.783 "nvme_io": false, 00:23:22.783 "nvme_io_md": false, 00:23:22.783 "write_zeroes": true, 00:23:22.783 "zcopy": false, 00:23:22.783 "get_zone_info": false, 00:23:22.783 "zone_management": false, 00:23:22.783 "zone_append": false, 00:23:22.783 "compare": false, 00:23:22.783 "compare_and_write": false, 00:23:22.783 "abort": false, 00:23:22.783 "seek_hole": true, 00:23:22.783 "seek_data": true, 00:23:22.783 "copy": false, 00:23:22.783 "nvme_iov_md": false 00:23:22.783 }, 00:23:22.783 "driver_specific": { 00:23:22.783 "lvol": { 00:23:22.783 "lvol_store_uuid": "f3b313db-1cb9-4516-996e-147ca190e2d5", 00:23:22.783 "base_bdev": "aio_bdev", 00:23:22.783 "thin_provision": false, 00:23:22.783 "num_allocated_clusters": 38, 00:23:22.783 "snapshot": false, 00:23:22.783 "clone": false, 00:23:22.783 "esnap_clone": false 00:23:22.783 } 00:23:22.783 } 00:23:22.783 } 00:23:22.783 ] 00:23:23.043 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:23:23.043 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:23.043 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:23:23.302 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:23:23.302 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:23.302 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:23:23.560 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:23:23.560 17:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:23.560 [2024-07-22 17:01:25.114095] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:23.560 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:23.818 request: 00:23:23.819 { 00:23:23.819 "uuid": "f3b313db-1cb9-4516-996e-147ca190e2d5", 00:23:23.819 "method": "bdev_lvol_get_lvstores", 00:23:23.819 "req_id": 1 00:23:23.819 } 00:23:23.819 Got JSON-RPC error response 00:23:23.819 response: 00:23:23.819 { 00:23:23.819 "code": -19, 00:23:23.819 "message": "No such device" 00:23:23.819 } 00:23:23.819 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:23:23.819 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:23.819 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:23.819 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:23.819 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:24.077 aio_bdev 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:24.077 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:24.353 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ca2a8cc-8ca4-4635-8790-dc761d54fd5a -t 2000 00:23:24.353 [ 00:23:24.353 { 00:23:24.353 "name": "2ca2a8cc-8ca4-4635-8790-dc761d54fd5a", 00:23:24.353 "aliases": [ 00:23:24.353 "lvs/lvol" 00:23:24.353 ], 00:23:24.353 "product_name": "Logical Volume", 00:23:24.353 "block_size": 4096, 00:23:24.353 "num_blocks": 38912, 00:23:24.353 "uuid": "2ca2a8cc-8ca4-4635-8790-dc761d54fd5a", 00:23:24.353 "assigned_rate_limits": { 00:23:24.353 "rw_ios_per_sec": 0, 00:23:24.353 "rw_mbytes_per_sec": 0, 00:23:24.353 "r_mbytes_per_sec": 0, 00:23:24.353 "w_mbytes_per_sec": 0 00:23:24.353 }, 00:23:24.353 "claimed": false, 00:23:24.353 "zoned": false, 00:23:24.353 "supported_io_types": { 00:23:24.353 "read": true, 00:23:24.353 "write": true, 00:23:24.353 "unmap": true, 00:23:24.353 "flush": false, 00:23:24.353 "reset": true, 00:23:24.353 "nvme_admin": false, 00:23:24.353 "nvme_io": false, 00:23:24.353 "nvme_io_md": false, 00:23:24.353 "write_zeroes": true, 00:23:24.353 "zcopy": false, 00:23:24.353 "get_zone_info": false, 00:23:24.353 "zone_management": false, 00:23:24.353 "zone_append": false, 00:23:24.353 "compare": false, 00:23:24.353 "compare_and_write": false, 00:23:24.353 "abort": false, 00:23:24.353 "seek_hole": true, 00:23:24.353 "seek_data": true, 00:23:24.353 "copy": false, 00:23:24.353 "nvme_iov_md": false 00:23:24.353 }, 00:23:24.353 "driver_specific": { 00:23:24.353 "lvol": { 00:23:24.353 "lvol_store_uuid": "f3b313db-1cb9-4516-996e-147ca190e2d5", 00:23:24.353 "base_bdev": "aio_bdev", 00:23:24.353 "thin_provision": false, 00:23:24.353 "num_allocated_clusters": 38, 00:23:24.353 "snapshot": false, 00:23:24.353 "clone": false, 00:23:24.353 "esnap_clone": false 00:23:24.353 } 00:23:24.353 } 00:23:24.353 } 00:23:24.353 ] 00:23:24.353 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:23:24.353 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:24.353 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:23:24.611 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:24.611 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:24.611 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:24.881 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:24.881 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ca2a8cc-8ca4-4635-8790-dc761d54fd5a 00:23:25.140 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3b313db-1cb9-4516-996e-147ca190e2d5 00:23:25.399 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:25.656 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:26.230 ************************************ 00:23:26.231 END TEST lvs_grow_dirty 00:23:26.231 ************************************ 00:23:26.231 00:23:26.231 real 0m20.903s 00:23:26.231 user 0m45.215s 00:23:26.231 sys 0m7.828s 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:26.231 nvmf_trace.0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.231 
17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.231 rmmod nvme_tcp 00:23:26.231 rmmod nvme_fabrics 00:23:26.231 rmmod nvme_keyring 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 70488 ']' 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 70488 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 70488 ']' 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 70488 00:23:26.231 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70488 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:26.494 killing process with pid 70488 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70488' 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 70488 00:23:26.494 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 70488 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:27.876 00:23:27.876 real 0m43.874s 00:23:27.876 user 1m9.680s 00:23:27.876 sys 0m11.704s 00:23:27.876 17:01:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:27.876 ************************************ 00:23:27.876 END TEST nvmf_lvs_grow 00:23:27.876 ************************************ 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:27.876 ************************************ 00:23:27.876 START TEST nvmf_bdev_io_wait 00:23:27.876 ************************************ 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:23:27.876 * Looking for test storage... 00:23:27.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.876 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:23:27.877 17:01:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.877 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:28.135 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:28.136 Cannot find device "nvmf_tgt_br" 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.136 Cannot find device "nvmf_tgt_br2" 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:28.136 Cannot find device "nvmf_tgt_br" 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:28.136 Cannot find device "nvmf_tgt_br2" 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:28.136 17:01:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:28.136 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:28.404 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:28.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:23:28.405 00:23:28.405 --- 10.0.0.2 ping statistics --- 00:23:28.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.405 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:28.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:28.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:23:28.405 00:23:28.405 --- 10.0.0.3 ping statistics --- 00:23:28.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.405 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:28.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:28.405 00:23:28.405 --- 10.0.0.1 ping statistics --- 00:23:28.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.405 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=70813 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 70813 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 70813 ']' 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
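The nvmf_veth_init trace above boils down to a small fixed topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2) live inside the nvmf_tgt_ns_spdk namespace, and the bridge-side veth peers are enslaved to nvmf_br so the two sides can reach each other. A condensed sketch of the same bring-up, assuming root privileges and iproute2 (the script first tears down leftovers, which is what the tolerated "Cannot find device" / "Cannot open network namespace" messages above are):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # and 10.0.0.3; then 10.0.0.1 from inside the namespace

The three single-packet pings are the sanity check that the bridge forwards in both directions before the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc, as traced right after).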
00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.405 17:01:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:28.668 [2024-07-22 17:01:30.044556] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:28.668 [2024-07-22 17:01:30.044725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.668 [2024-07-22 17:01:30.233460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.925 [2024-07-22 17:01:30.519189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.925 [2024-07-22 17:01:30.519304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.925 [2024-07-22 17:01:30.519332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.925 [2024-07-22 17:01:30.519356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.925 [2024-07-22 17:01:30.519379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.926 [2024-07-22 17:01:30.520211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.926 [2024-07-22 17:01:30.520407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.926 [2024-07-22 17:01:30.520515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.926 [2024-07-22 17:01:30.520521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.491 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.491 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:23:29.491 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.491 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.491 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.491 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:29.750 [2024-07-22 17:01:31.289901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:23:29.750 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.750 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.750 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.750 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:29.750 [2024-07-22 17:01:31.311131] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.750 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.751 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.751 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.751 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 Malloc0 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 [2024-07-22 17:01:31.464131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=70854 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=70856 00:23:30.085 17:01:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.085 { 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme$subsystem", 00:23:30.085 "trtype": "$TEST_TRANSPORT", 00:23:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "$NVMF_PORT", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.085 "hdgst": ${hdgst:-false}, 00:23:30.085 "ddgst": ${ddgst:-false} 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 } 00:23:30.085 EOF 00:23:30.085 )") 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=70858 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.085 { 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme$subsystem", 00:23:30.085 "trtype": "$TEST_TRANSPORT", 00:23:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "$NVMF_PORT", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.085 "hdgst": ${hdgst:-false}, 00:23:30.085 "ddgst": ${ddgst:-false} 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 } 00:23:30.085 EOF 00:23:30.085 )") 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=70859 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:23:30.085 
17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.085 { 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme$subsystem", 00:23:30.085 "trtype": "$TEST_TRANSPORT", 00:23:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "$NVMF_PORT", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.085 "hdgst": ${hdgst:-false}, 00:23:30.085 "ddgst": ${ddgst:-false} 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 } 00:23:30.085 EOF 00:23:30.085 )") 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.085 { 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme$subsystem", 00:23:30.085 "trtype": "$TEST_TRANSPORT", 00:23:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "$NVMF_PORT", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.085 "hdgst": ${hdgst:-false}, 00:23:30.085 "ddgst": ${ddgst:-false} 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 } 00:23:30.085 EOF 00:23:30.085 )") 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
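Stripped of the xtrace noise, the rpc_cmd calls earlier in this test stand the whole target up in seven RPCs. A condensed sketch using scripts/rpc.py (the same helper invoked elsewhere in this log), assuming the default /var/tmp/spdk.sock socket of the nvmf_tgt that was started with --wait-for-rpc:

  rpc.py bdev_set_options -p 5 -c 1                 # small bdev_io pool, the point of the bdev_io_wait test
  rpc.py framework_start_init                       # leave the --wait-for-rpc holding state
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB backing device, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the last call the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appears, and the gen_nvmf_target_json heredocs above template one bdev_nvme_attach_controller entry per bdevperf instance so that each helper process connects to the same cnode1 over 10.0.0.2:4420.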
00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme1", 00:23:30.085 "trtype": "tcp", 00:23:30.085 "traddr": "10.0.0.2", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "4420", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.085 "hdgst": false, 00:23:30.085 "ddgst": false 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 }' 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme1", 00:23:30.085 "trtype": "tcp", 00:23:30.085 "traddr": "10.0.0.2", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "4420", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.085 "hdgst": false, 00:23:30.085 "ddgst": false 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 }' 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme1", 00:23:30.085 "trtype": "tcp", 00:23:30.085 "traddr": "10.0.0.2", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "4420", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.085 "hdgst": false, 00:23:30.085 "ddgst": false 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 }' 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:30.085 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme1", 00:23:30.085 "trtype": "tcp", 00:23:30.085 "traddr": "10.0.0.2", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "4420", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.085 "hdgst": false, 00:23:30.085 "ddgst": false 00:23:30.086 }, 00:23:30.086 "method": "bdev_nvme_attach_controller" 00:23:30.086 }' 00:23:30.086 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 70854 00:23:30.086 [2024-07-22 17:01:31.594580] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:23:30.086 [2024-07-22 17:01:31.594719] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:23:30.086 [2024-07-22 17:01:31.597218] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:30.086 [2024-07-22 17:01:31.597351] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:23:30.086 [2024-07-22 17:01:31.598564] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:30.086 [2024-07-22 17:01:31.599077] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:23:30.086 [2024-07-22 17:01:31.630334] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:30.086 [2024-07-22 17:01:31.630554] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:30.343 [2024-07-22 17:01:31.833055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.343 [2024-07-22 17:01:31.893265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.343 [2024-07-22 17:01:31.958476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.602 [2024-07-22 17:01:32.022055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.602 [2024-07-22 17:01:32.129126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:23:30.602 [2024-07-22 17:01:32.151421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:30.602 [2024-07-22 17:01:32.213841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:30.859 [2024-07-22 17:01:32.313111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:30.859 [2024-07-22 17:01:32.398338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:30.859 [2024-07-22 17:01:32.464718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:31.116 [2024-07-22 17:01:32.490174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:31.116 [2024-07-22 17:01:32.589508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:31.116 Running I/O for 1 seconds... 00:23:31.116 Running I/O for 1 seconds... 00:23:31.116 Running I/O for 1 seconds... 00:23:31.374 Running I/O for 1 seconds... 
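The four "Total cores available: 1" / "Reactor started on core 4..7" notices above follow directly from the -m core masks the bdevperf helpers were launched with: bit n of the mask selects CPU core n, so each helper pins its single reactor to its own core, disjoint from the target's -m 0xF (reactors on cores 0-3):

  -m 0x10  ->  bit 4  ->  core 4   (-i 1, write job)
  -m 0x20  ->  bit 5  ->  core 5   (-i 2, read job)
  -m 0x40  ->  bit 6  ->  core 6   (-i 3, flush job)
  -m 0x80  ->  bit 7  ->  core 7   (-i 4, unmap job)

The distinct -i instance IDs keep the four processes' shared-memory state apart, which is why the EAL parameter lines above carry --file-prefix=spdk1 through spdk4.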
00:23:32.308 00:23:32.309 Latency(us) 00:23:32.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.309 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:23:32.309 Nvme1n1 : 1.02 6331.61 24.73 0.00 0.00 20081.39 6616.02 29459.99 00:23:32.309 =================================================================================================================== 00:23:32.309 Total : 6331.61 24.73 0.00 0.00 20081.39 6616.02 29459.99 00:23:32.309 00:23:32.309 Latency(us) 00:23:32.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.309 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:23:32.309 Nvme1n1 : 1.00 168136.47 656.78 0.00 0.00 758.65 399.85 2278.16 00:23:32.309 =================================================================================================================== 00:23:32.309 Total : 168136.47 656.78 0.00 0.00 758.65 399.85 2278.16 00:23:32.309 00:23:32.309 Latency(us) 00:23:32.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.309 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:23:32.309 Nvme1n1 : 1.01 7252.05 28.33 0.00 0.00 17536.37 7583.45 24466.77 00:23:32.309 =================================================================================================================== 00:23:32.309 Total : 7252.05 28.33 0.00 0.00 17536.37 7583.45 24466.77 00:23:32.309 00:23:32.309 Latency(us) 00:23:32.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.309 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:23:32.309 Nvme1n1 : 1.01 6142.62 23.99 0.00 0.00 20769.01 5991.86 45937.62 00:23:32.309 =================================================================================================================== 00:23:32.309 Total : 6142.62 23.99 0.00 0.00 20769.01 5991.86 45937.62 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 70856 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 70858 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 70859 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
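A quick consistency check on the four bdevperf tables above: with a fixed 4096-byte IO size, MiB/s is just IOPS × 4096 / 2^20, i.e. IOPS / 256:

  write :   6142.62 IOPS / 256  ≈  23.99 MiB/s
  read  :   7252.05 IOPS / 256  ≈  28.33 MiB/s
  unmap :   6331.61 IOPS / 256  ≈  24.73 MiB/s
  flush : 168136.47 IOPS / 256  ≈ 656.78 MiB/s

The flush job is the outlier because flushes against the memory-backed Malloc0 bdev have no media to sync, so that row is effectively measuring request round-trip overhead rather than data movement.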
00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.725 rmmod nvme_tcp 00:23:33.725 rmmod nvme_fabrics 00:23:33.725 rmmod nvme_keyring 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 70813 ']' 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 70813 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 70813 ']' 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 70813 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70813 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70813' 00:23:33.725 killing process with pid 70813 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 70813 00:23:33.725 17:01:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 70813 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:35.125 00:23:35.125 real 0m7.328s 00:23:35.125 user 0m34.215s 00:23:35.125 sys 0m2.871s 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.125 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:35.125 ************************************ 00:23:35.125 END TEST nvmf_bdev_io_wait 
00:23:35.125 ************************************ 00:23:35.409 17:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:23:35.409 17:01:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:23:35.409 17:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:35.409 17:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.409 17:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:35.409 ************************************ 00:23:35.409 START TEST nvmf_queue_depth 00:23:35.409 ************************************ 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:23:35.410 * Looking for test storage... 00:23:35.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 
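The nvmf_veth_init steps traced below build the virtual test network: one veth pair for the initiator and two for the target, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace and every host-side peer enslaved to the nvmf_br bridge. A condensed sketch of that topology, using only the namespace, interface names and addresses that appear in the trace (the script's cleanup of leftover devices and its error fallbacks are omitted):

ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target links.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace; 10.0.0.1 is the initiator,
# 10.0.0.2 and 10.0.0.3 are the target's listen addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic (port 4420) in and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host and that 10.0.0.1 is reachable from inside the namespace before the target is started.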
00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:35.410 Cannot find device "nvmf_tgt_br" 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:35.410 Cannot find device "nvmf_tgt_br2" 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:35.410 Cannot find device "nvmf_tgt_br" 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:35.410 Cannot find device "nvmf_tgt_br2" 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:35.410 17:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:35.410 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:35.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.410 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:23:35.410 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:35.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.410 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:23:35.410 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:35.410 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:35.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:23:35.684 00:23:35.684 --- 10.0.0.2 ping statistics --- 00:23:35.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.684 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:35.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:35.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:23:35.684 00:23:35.684 --- 10.0.0.3 ping statistics --- 00:23:35.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.684 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:35.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:35.684 00:23:35.684 --- 10.0.0.1 ping statistics --- 00:23:35.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.684 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=71122 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 71122 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 71122 ']' 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
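With the initiator and both target addresses verified, queue_depth.sh starts nvmf_tgt inside the namespace (nvmfpid=71122 above) and waits for its RPC socket before configuring the target. The rpc_cmd calls traced below forward to scripts/rpc.py against /var/tmp/spdk.sock; run by hand, the setup would look roughly as follows (sizes, names and addresses are taken from the trace, backgrounding and the socket wait are simplified):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# ...wait for /var/tmp/spdk.sock to appear, then:
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport for the target
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                                       # allow any host, fixed serial
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420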
00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.684 17:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:35.943 [2024-07-22 17:01:37.418551] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:35.943 [2024-07-22 17:01:37.418720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.201 [2024-07-22 17:01:37.610234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.460 [2024-07-22 17:01:37.906769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.460 [2024-07-22 17:01:37.906829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.460 [2024-07-22 17:01:37.906842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.460 [2024-07-22 17:01:37.906856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.460 [2024-07-22 17:01:37.906867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.460 [2024-07-22 17:01:37.906916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.719 [2024-07-22 17:01:38.155276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:36.977 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.977 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:36.978 [2024-07-22 17:01:38.422936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:36.978 Malloc0 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.978 17:01:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:36.978 [2024-07-22 17:01:38.546592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=71160 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 71160 /var/tmp/bdevperf.sock 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 71160 ']' 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.978 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:37.237 [2024-07-22 17:01:38.669983] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
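The I/O side of the test is driven by bdevperf on the host: it is launched idle (-z) with its own RPC socket, a queue depth of 1024 and 4 KiB I/O, the exported subsystem is attached as controller NVMe0 (bdev NVMe0n1), and bdevperf.py then triggers the 10-second verify workload. Condensed from the commands traced around this point:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# ...wait for /var/tmp/bdevperf.sock, then attach the target and run the workload:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

The per-device summary printed after the 10-second run (IOPS, MiB/s, failure and timeout counters) comes from this workload.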
00:23:37.237 [2024-07-22 17:01:38.670165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71160 ] 00:23:37.494 [2024-07-22 17:01:38.860008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.754 [2024-07-22 17:01:39.168177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.013 [2024-07-22 17:01:39.430058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:38.013 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.277 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:23:38.277 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.277 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.277 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:38.277 NVMe0n1 00:23:38.277 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.277 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.277 Running I/O for 10 seconds... 00:23:48.365 00:23:48.365 Latency(us) 00:23:48.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.365 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:23:48.365 Verification LBA range: start 0x0 length 0x4000 00:23:48.365 NVMe0n1 : 10.08 7471.22 29.18 0.00 0.00 136378.12 22094.99 94871.16 00:23:48.365 =================================================================================================================== 00:23:48.365 Total : 7471.22 29.18 0.00 0.00 136378.12 22094.99 94871.16 00:23:48.365 0 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 71160 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 71160 ']' 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 71160 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71160 00:23:48.365 killing process with pid 71160 00:23:48.365 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.365 00:23:48.365 Latency(us) 00:23:48.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.365 =================================================================================================================== 00:23:48.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71160' 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 71160 00:23:48.365 17:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 71160 00:23:50.265 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:23:50.265 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:23:50.265 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.266 rmmod nvme_tcp 00:23:50.266 rmmod nvme_fabrics 00:23:50.266 rmmod nvme_keyring 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 71122 ']' 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 71122 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 71122 ']' 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 71122 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71122 00:23:50.266 killing process with pid 71122 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71122' 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 71122 00:23:50.266 17:01:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 71122 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:51.642 17:01:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:51.642 00:23:51.642 real 0m16.447s 00:23:51.642 user 0m27.296s 00:23:51.642 sys 0m2.690s 00:23:51.642 ************************************ 00:23:51.642 END TEST nvmf_queue_depth 00:23:51.642 ************************************ 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.642 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:51.900 ************************************ 00:23:51.900 START TEST nvmf_target_multipath 00:23:51.900 ************************************ 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:23:51.900 * Looking for test storage... 
00:23:51.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:51.900 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:51.901 17:01:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:51.901 Cannot find device "nvmf_tgt_br" 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.901 Cannot find device "nvmf_tgt_br2" 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:51.901 Cannot find device "nvmf_tgt_br" 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:51.901 Cannot find device "nvmf_tgt_br2" 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:51.901 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:52.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:52.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:52.159 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:52.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:23:52.160 00:23:52.160 --- 10.0.0.2 ping statistics --- 00:23:52.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.160 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:52.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:52.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:23:52.160 00:23:52.160 --- 10.0.0.3 ping statistics --- 00:23:52.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.160 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:52.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:23:52.160 00:23:52.160 --- 10.0.0.1 ping statistics --- 00:23:52.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.160 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.160 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=71506 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 71506 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 71506 ']' 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.418 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:52.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.419 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.419 17:01:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:52.419 [2024-07-22 17:01:53.912580] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:52.419 [2024-07-22 17:01:53.912755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.678 [2024-07-22 17:01:54.101133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.948 [2024-07-22 17:01:54.444234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.948 [2024-07-22 17:01:54.444316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.948 [2024-07-22 17:01:54.444333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.948 [2024-07-22 17:01:54.444349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.948 [2024-07-22 17:01:54.444365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.948 [2024-07-22 17:01:54.444622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.948 [2024-07-22 17:01:54.444806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.948 [2024-07-22 17:01:54.445760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.948 [2024-07-22 17:01:54.445782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.215 [2024-07-22 17:01:54.727026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.474 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:53.732 [2024-07-22 17:01:55.158638] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.732 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:53.990 Malloc0 00:23:53.990 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:23:54.249 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.508 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.508 [2024-07-22 17:01:56.115414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.766 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:54.766 [2024-07-22 17:01:56.323532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:54.766 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:23:55.042 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:23:55.042 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:23:55.042 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:23:55.042 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.042 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:55.042 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:23:57.575 17:01:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:57.575 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:57.576 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:57.576 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:57.576 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:23:57.576 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=71596 00:23:57.576 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:23:57.576 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:23:57.576 [global] 00:23:57.576 thread=1 00:23:57.576 invalidate=1 00:23:57.576 rw=randrw 00:23:57.576 time_based=1 00:23:57.576 runtime=6 00:23:57.576 ioengine=libaio 00:23:57.576 direct=1 00:23:57.576 bs=4096 00:23:57.576 iodepth=128 00:23:57.576 norandommap=0 00:23:57.576 numjobs=1 00:23:57.576 00:23:57.576 verify_dump=1 00:23:57.576 verify_backlog=512 00:23:57.576 verify_state_save=0 00:23:57.576 do_verify=1 00:23:57.576 verify=crc32c-intel 00:23:57.576 [job0] 00:23:57.576 filename=/dev/nvme0n1 00:23:57.576 Could not set queue depth (nvme0n1) 00:23:57.576 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:57.576 fio-3.35 00:23:57.576 Starting 1 thread 00:23:58.156 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:58.414 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:58.673 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.930 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
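Each failover step above pairs an ANA change on the target with a poll of the per-path ana_state attribute the kernel exposes on the host; a minimal sketch of one flip (10.0.0.2 inaccessible, 10.0.0.3 non-optimized), reusing $rpc from the earlier sketch and the path names enumerated above:

$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
# note the spelling difference: the RPC takes non_optimized, sysfs reports non-optimized
# host side: give the kernel up to ~20 s (as check_ana_state does) to report the new states
for _ in $(seq 20); do
    [[ $(cat /sys/block/nvme0c0n1/ana_state) == inaccessible ]] &&
        [[ $(cat /sys/block/nvme0c1n1/ana_state) == non-optimized ]] && break
    sleep 1
done

While the states are being flipped, the fio job started above keeps issuing I/O to /dev/nvme0n1, so a clean run demonstrates that I/O survives the path changes.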
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:59.189 17:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 71596 00:24:03.397 00:24:03.397 job0: (groupid=0, jobs=1): err= 0: pid=71617: Mon Jul 22 17:02:05 2024 00:24:03.397 read: IOPS=9203, BW=35.9MiB/s (37.7MB/s)(216MiB/6008msec) 00:24:03.397 slat (usec): min=3, max=7030, avg=64.87, stdev=269.31 00:24:03.397 clat (usec): min=1688, max=18710, avg=9584.86, stdev=1827.21 00:24:03.397 lat (usec): min=1702, max=18724, avg=9649.72, stdev=1832.96 00:24:03.397 clat percentiles (usec): 00:24:03.397 | 1.00th=[ 5014], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8586], 00:24:03.397 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:24:03.397 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11338], 95.00th=[13698], 00:24:03.397 | 99.00th=[15533], 99.50th=[15926], 99.90th=[17695], 99.95th=[17957], 00:24:03.397 | 99.99th=[18744] 00:24:03.397 bw ( KiB/s): min= 8656, max=23184, per=50.57%, avg=18616.42, stdev=4205.51, samples=12 00:24:03.397 iops : min= 2164, max= 5796, avg=4654.08, stdev=1051.36, samples=12 00:24:03.397 write: IOPS=5206, BW=20.3MiB/s (21.3MB/s)(110MiB/5392msec); 0 zone resets 00:24:03.397 slat (usec): min=4, max=4861, avg=74.57, stdev=188.74 00:24:03.397 clat (usec): min=1249, max=17926, avg=8259.74, stdev=1604.44 00:24:03.397 lat (usec): min=1286, max=18317, avg=8334.31, stdev=1609.52 00:24:03.397 clat percentiles (usec): 00:24:03.397 | 1.00th=[ 3851], 5.00th=[ 4948], 10.00th=[ 5997], 20.00th=[ 7504], 00:24:03.397 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:24:03.397 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10028], 00:24:03.397 | 99.00th=[13435], 99.50th=[14353], 99.90th=[16188], 99.95th=[17171], 00:24:03.397 | 99.99th=[17957] 00:24:03.397 bw ( KiB/s): min= 9080, max=22536, per=89.64%, avg=18667.75, stdev=3969.49, samples=12 00:24:03.397 iops : min= 2270, max= 5634, avg=4666.83, stdev=992.32, samples=12 00:24:03.397 lat (msec) : 2=0.01%, 4=0.64%, 10=79.37%, 20=19.98% 00:24:03.397 cpu : usr=4.86%, sys=19.69%, ctx=4763, majf=0, minf=72 00:24:03.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:03.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.397 issued rwts: total=55293,28072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.397 00:24:03.397 Run status group 0 (all jobs): 00:24:03.397 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=216MiB (226MB), run=6008-6008msec 00:24:03.397 WRITE: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=110MiB (115MB), run=5392-5392msec 00:24:03.397 00:24:03.397 Disk stats (read/write): 00:24:03.397 nvme0n1: ios=54601/27647, merge=0/0, ticks=500291/214264, in_queue=714555, util=98.58% 00:24:03.397 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:03.964 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=71698 00:24:04.223 17:02:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:24:04.223 [global] 00:24:04.223 thread=1 00:24:04.223 invalidate=1 00:24:04.223 rw=randrw 00:24:04.223 time_based=1 00:24:04.223 runtime=6 00:24:04.223 ioengine=libaio 00:24:04.223 direct=1 00:24:04.223 bs=4096 00:24:04.223 iodepth=128 00:24:04.223 norandommap=0 00:24:04.223 numjobs=1 00:24:04.223 00:24:04.223 verify_dump=1 00:24:04.223 verify_backlog=512 00:24:04.223 verify_state_save=0 00:24:04.223 do_verify=1 00:24:04.223 verify=crc32c-intel 00:24:04.223 [job0] 00:24:04.223 filename=/dev/nvme0n1 00:24:04.223 Could not set queue depth (nvme0n1) 00:24:04.482 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:04.482 fio-3.35 00:24:04.482 Starting 1 thread 00:24:05.413 17:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:05.413 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:05.671 
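The two fio passes differ in the kernel's native multipath I/O policy: the first ran after 'echo numa', this second one after 'echo round-robin'. The xtrace lines do not show where those echoes are redirected, so the sysfs path below is an assumption based on the nvme-subsys0 subsystem found earlier:

# assumed redirect target of the 'echo numa' / 'echo round-robin' seen in the trace
echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy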
17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:05.671 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.929 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:06.187 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 71698 00:24:11.465 00:24:11.465 job0: (groupid=0, jobs=1): err= 0: pid=71719: Mon Jul 22 17:02:12 2024 00:24:11.465 read: IOPS=9688, BW=37.8MiB/s (39.7MB/s)(227MiB/6008msec) 00:24:11.465 slat (usec): min=5, max=10180, avg=54.47, stdev=273.78 00:24:11.465 clat (usec): min=291, max=86120, avg=9256.16, stdev=4632.75 00:24:11.465 lat (usec): min=336, max=86137, avg=9310.63, stdev=4665.36 00:24:11.465 clat percentiles (usec): 00:24:11.465 | 1.00th=[ 1811], 5.00th=[ 4293], 10.00th=[ 5538], 20.00th=[ 7111], 00:24:11.465 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:11.465 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12387], 95.00th=[13698], 00:24:11.465 | 99.00th=[31851], 99.50th=[40109], 99.90th=[67634], 99.95th=[77071], 00:24:11.465 | 99.99th=[83362] 00:24:11.465 bw ( KiB/s): min= 5464, max=31808, per=51.61%, avg=20001.91, stdev=7518.62, samples=11 00:24:11.465 iops : min= 1366, max= 7952, avg=5000.45, stdev=1879.65, samples=11 00:24:11.465 write: IOPS=5684, BW=22.2MiB/s (23.3MB/s)(116MiB/5206msec); 0 zone resets 00:24:11.465 slat (usec): min=11, max=11467, avg=60.75, stdev=207.23 00:24:11.465 clat (usec): min=335, max=78947, avg=7575.02, stdev=4307.15 00:24:11.465 lat (usec): min=364, max=78966, avg=7635.77, stdev=4339.34 00:24:11.465 clat percentiles (usec): 00:24:11.465 | 1.00th=[ 1516], 5.00th=[ 3228], 10.00th=[ 4080], 20.00th=[ 5014], 00:24:11.465 | 30.00th=[ 6063], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8356], 00:24:11.465 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10290], 00:24:11.465 | 99.00th=[13960], 99.50th=[35390], 99.90th=[67634], 99.95th=[68682], 00:24:11.465 | 99.99th=[79168] 00:24:11.465 bw ( KiB/s): min= 5824, max=31384, per=88.34%, avg=20085.27, stdev=7356.68, samples=11 00:24:11.465 iops : min= 1456, max= 7846, avg=5021.27, stdev=1839.16, samples=11 00:24:11.465 lat (usec) : 500=0.03%, 750=0.10%, 1000=0.15% 00:24:11.465 lat (msec) : 2=1.01%, 4=4.55%, 10=74.54%, 20=18.60%, 50=0.76% 00:24:11.465 lat (msec) : 100=0.25% 00:24:11.465 cpu : usr=4.64%, sys=19.74%, ctx=5216, majf=0, minf=96 00:24:11.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:11.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:11.465 issued rwts: total=58210,29591,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:11.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:11.465 00:24:11.465 Run status group 0 (all jobs): 00:24:11.465 READ: bw=37.8MiB/s (39.7MB/s), 37.8MiB/s-37.8MiB/s (39.7MB/s-39.7MB/s), io=227MiB (238MB), run=6008-6008msec 00:24:11.465 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=116MiB (121MB), run=5206-5206msec 00:24:11.465 00:24:11.465 Disk stats (read/write): 00:24:11.465 nvme0n1: ios=57413/29080, merge=0/0, ticks=511376/207212, in_queue=718588, util=98.67% 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:11.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.465 rmmod nvme_tcp 00:24:11.465 rmmod nvme_fabrics 00:24:11.465 rmmod nvme_keyring 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:24:11.465 17:02:12 
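Teardown mirrors the setup: one nvme disconnect by NQN drops both paths at once (hence the '2 controller(s)' message above), the subsystem is deleted over RPC, the fio verify-state files are removed, and nvmftestfini unloads the host modules. Condensed from the trace:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
modprobe -v -r nvme-tcp   # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines above show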
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 71506 ']' 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 71506 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 71506 ']' 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 71506 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71506 00:24:11.465 killing process with pid 71506 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71506' 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 71506 00:24:11.465 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 71506 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:12.840 ************************************ 00:24:12.840 END TEST nvmf_target_multipath 00:24:12.840 ************************************ 00:24:12.840 00:24:12.840 real 0m21.013s 00:24:12.840 user 1m14.516s 00:24:12.840 sys 0m10.931s 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.840 17:02:14 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:12.840 ************************************ 00:24:12.840 START TEST nvmf_zcopy 00:24:12.840 ************************************ 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:12.840 * Looking for test storage... 00:24:12.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:12.840 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:13.107 Cannot find device "nvmf_tgt_br" 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:13.107 Cannot find device "nvmf_tgt_br2" 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:13.107 Cannot find device "nvmf_tgt_br" 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:13.107 Cannot find device "nvmf_tgt_br2" 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:13.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:13.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:13.107 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:13.370 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:13.370 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:13.370 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:13.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:24:13.371 00:24:13.371 --- 10.0.0.2 ping statistics --- 00:24:13.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.371 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:13.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:13.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:24:13.371 00:24:13.371 --- 10.0.0.3 ping statistics --- 00:24:13.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.371 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:13.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
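Because NET_TYPE=virt, this run builds its own test network instead of touching real NICs: a network namespace for the target, veth pairs bridged together, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace (the earlier 'Cannot find device' / 'Cannot open network namespace' messages are only the cleanup of a topology that did not exist yet). Condensed from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up                      # likewise 'up' for every veth end, as in the trace
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings around this point verify that all three addresses answer; the target is then started inside the namespace (the 'ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2' invocation that follows), so it listens on 10.0.0.2/10.0.0.3 while the initiator side stays in the root namespace on 10.0.0.1.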
00:24:13.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:13.371 00:24:13.371 --- 10.0.0.1 ping statistics --- 00:24:13.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.371 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=71983 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 71983 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 71983 ']' 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.371 17:02:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:13.629 [2024-07-22 17:02:14.995225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:13.629 [2024-07-22 17:02:14.995415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.629 [2024-07-22 17:02:15.184597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.196 [2024-07-22 17:02:15.527519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.196 [2024-07-22 17:02:15.527594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.196 [2024-07-22 17:02:15.527615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.196 [2024-07-22 17:02:15.527635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.196 [2024-07-22 17:02:15.527662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.196 [2024-07-22 17:02:15.527738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.196 [2024-07-22 17:02:15.795953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:14.455 17:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.455 17:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:24:14.455 17:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.455 17:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:14.455 17:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:14.455 [2024-07-22 17:02:16.025755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.455 [2024-07-22 17:02:16.041942] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.455 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:14.712 malloc0 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.712 { 00:24:14.712 "params": { 00:24:14.712 "name": "Nvme$subsystem", 00:24:14.712 "trtype": "$TEST_TRANSPORT", 00:24:14.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.712 "adrfam": "ipv4", 00:24:14.712 "trsvcid": "$NVMF_PORT", 00:24:14.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.712 "hdgst": ${hdgst:-false}, 00:24:14.712 "ddgst": ${ddgst:-false} 00:24:14.712 }, 00:24:14.712 "method": "bdev_nvme_attach_controller" 00:24:14.712 } 00:24:14.712 EOF 00:24:14.712 )") 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
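For the I/O itself the test bypasses the kernel initiator entirely: gen_nvmf_target_json emits a small JSON config whose single entry is the bdev_nvme_attach_controller call being assembled above, and bdevperf reads it from /dev/fd/62 while talking to the --zcopy TCP transport created earlier. A roughly equivalent invocation, written to a file instead of a pipe; the subsystems/config wrapper is assumed here, only the params block is printed verbatim further down:

cat > /tmp/bdevperf_nvmf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192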
00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:24:14.712 17:02:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:14.712 "params": { 00:24:14.712 "name": "Nvme1", 00:24:14.712 "trtype": "tcp", 00:24:14.712 "traddr": "10.0.0.2", 00:24:14.712 "adrfam": "ipv4", 00:24:14.712 "trsvcid": "4420", 00:24:14.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.712 "hdgst": false, 00:24:14.712 "ddgst": false 00:24:14.712 }, 00:24:14.713 "method": "bdev_nvme_attach_controller" 00:24:14.713 }' 00:24:14.713 [2024-07-22 17:02:16.267913] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:14.713 [2024-07-22 17:02:16.268132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72016 ] 00:24:14.971 [2024-07-22 17:02:16.447201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.229 [2024-07-22 17:02:16.816119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.488 [2024-07-22 17:02:17.097012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:15.746 Running I/O for 10 seconds... 00:24:25.733 00:24:25.733 Latency(us) 00:24:25.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:24:25.733 Verification LBA range: start 0x0 length 0x1000 00:24:25.733 Nvme1n1 : 10.01 5654.88 44.18 0.00 0.00 22572.66 2246.95 31956.60 00:24:25.733 =================================================================================================================== 00:24:25.733 Total : 5654.88 44.18 0.00 0.00 22572.66 2246.95 31956.60 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=72155 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:27.109 { 00:24:27.109 "params": { 00:24:27.109 "name": "Nvme$subsystem", 00:24:27.109 "trtype": "$TEST_TRANSPORT", 00:24:27.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.109 "adrfam": "ipv4", 00:24:27.109 "trsvcid": "$NVMF_PORT", 00:24:27.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.109 "hdgst": ${hdgst:-false}, 00:24:27.109 "ddgst": ${ddgst:-false} 00:24:27.109 }, 00:24:27.109 "method": "bdev_nvme_attach_controller" 00:24:27.109 } 00:24:27.109 
EOF 00:24:27.109 )") 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:24:27.109 [2024-07-22 17:02:28.669419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.109 [2024-07-22 17:02:28.669476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:24:27.109 17:02:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:27.109 "params": { 00:24:27.109 "name": "Nvme1", 00:24:27.109 "trtype": "tcp", 00:24:27.109 "traddr": "10.0.0.2", 00:24:27.109 "adrfam": "ipv4", 00:24:27.109 "trsvcid": "4420", 00:24:27.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.109 "hdgst": false, 00:24:27.109 "ddgst": false 00:24:27.109 }, 00:24:27.109 "method": "bdev_nvme_attach_controller" 00:24:27.109 }' 00:24:27.109 [2024-07-22 17:02:28.677436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.109 [2024-07-22 17:02:28.677488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.109 [2024-07-22 17:02:28.689412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.109 [2024-07-22 17:02:28.689476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.109 [2024-07-22 17:02:28.701436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.109 [2024-07-22 17:02:28.701494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.109 [2024-07-22 17:02:28.713489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.109 [2024-07-22 17:02:28.713549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.109 [2024-07-22 17:02:28.725454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.109 [2024-07-22 17:02:28.725507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.737452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.737522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.749446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.749498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.761441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.761512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.773469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.773524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.781560] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
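The traced shell above (nvmf/common.sh@554-558 together with the bdevperf command at target/zcopy.sh@37) resolves the gen_nvmf_target_json template into a single bdev_nvme_attach_controller entry and feeds it to bdevperf through --json /dev/fd/63. Below is a minimal standalone sketch of the same invocation, not something this log runs: the attach parameters and the workload flags are copied from the resolved config and command line printed above, while the temporary file name and the surrounding "subsystems"/"bdev" wrapper are assumptions based on SPDK's usual --json config layout rather than anything shown in this trace.

# Sketch only: equivalent standalone bdevperf run. The params block is copied from
# the printf output above; /tmp/bdevperf_nvme.json and the subsystems/bdev wrapper
# are illustrative assumptions.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same workload flags as the zcopy run above: 5 s, queue depth 128, 50/50 randrw, 8 KiB I/Os.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192

As a sanity check on the earlier 10-second run summarized above, 5654.88 IOPS at an 8 KiB I/O size works out to roughly 44.18 MiB/s, which matches the reported throughput.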
00:24:27.368 [2024-07-22 17:02:28.781733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72155 ] 00:24:27.368 [2024-07-22 17:02:28.785427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.785488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.797445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.797494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.809428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.809481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.821419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.821467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.833447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.833504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.845436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.845482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.857447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.857514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.869478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.869530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.881447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.881500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.893478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.893524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.905483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.905537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.917491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.917542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.929478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.929528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.941511] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.941560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.957479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.957531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.368 [2024-07-22 17:02:28.968556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.368 [2024-07-22 17:02:28.973521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.368 [2024-07-22 17:02:28.973573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:28.985505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:28.985566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:28.997557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:28.997633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.009563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.009629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.021511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.021571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.033545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.033625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.045561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.045620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.057555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.057614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.069607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.069664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.081575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.081633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.093586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.093639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.105570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.105625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.117550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.117606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.129612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.129674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.141617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.141682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.153582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.153663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.165613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.165673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.177595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.177664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.189630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.189690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.201611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.201672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.213634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.213691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.225627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.225703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.237606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.237657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.238158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.697 [2024-07-22 17:02:29.249591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.249648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.261620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.261677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.273606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.273662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.285649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:24:27.697 [2024-07-22 17:02:29.285707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.297644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.297703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.697 [2024-07-22 17:02:29.309638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.697 [2024-07-22 17:02:29.309697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.321672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.321796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.333663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.333722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.345644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.345712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.357708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.357772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.369648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.369715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.381674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.381734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.393682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.393748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.405659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.955 [2024-07-22 17:02:29.405718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.955 [2024-07-22 17:02:29.417693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.417759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.429703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.429763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.441665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.441722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.453698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.453753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.465667] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.465749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.477710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.477772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.489721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.489787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.501733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.501793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.513730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.513797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.516454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:27.956 [2024-07-22 17:02:29.525745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.525811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.537704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.537785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.549742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.549800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:27.956 [2024-07-22 17:02:29.561698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:27.956 [2024-07-22 17:02:29.561760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.573816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.573873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.585743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.585809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.597730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.597787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.609751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.609812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.621754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.621810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.633740] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.633808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.645800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.645864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.657745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.657811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.669760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.669840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.681775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.681829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.693787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.693877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.705813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.705869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.717811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.214 [2024-07-22 17:02:29.717867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.214 [2024-07-22 17:02:29.729903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.215 [2024-07-22 17:02:29.729957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.215 Running I/O for 5 seconds... 
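The repeated ERROR pairs through this stretch all come from the same two call sites: spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 because that namespace already exists on the subsystem, and the nvmf_rpc_ns_paused handler then reporting the failed add, while bdevperf keeps issuing I/O against cnode1. A rough sketch of the request shape that hits this path follows; the NQN and NSID are taken from the log, the bdev name "Malloc0" is a hypothetical placeholder, and how the request is delivered to the target (scripts/rpc.py or a raw write to the RPC socket) is left out.

# Sketch only: JSON-RPC body for re-adding NSID 1 on cnode1. With the namespace
# already present, the target answers with the two errors seen throughout this log.
# "Malloc0" is a hypothetical bdev name, not something this trace shows.
cat <<'JSON' > add_ns_request.json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": { "nsid": 1, "bdev_name": "Malloc0" }
  }
}
JSON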
00:24:28.215 [2024-07-22 17:02:29.741897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.215 [2024-07-22 17:02:29.741956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.215 [2024-07-22 17:02:29.758868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.215 [2024-07-22 17:02:29.758933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.215 [2024-07-22 17:02:29.775785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.215 [2024-07-22 17:02:29.775845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.215 [2024-07-22 17:02:29.792604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.215 [2024-07-22 17:02:29.792667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.215 [2024-07-22 17:02:29.817589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.215 [2024-07-22 17:02:29.817655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.833133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.833210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.848578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.848653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.858299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.858368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.874266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.874338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.891453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.891545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.907643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.907718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.924745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.924816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.941950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.942015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.957885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.957957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.968238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 
[2024-07-22 17:02:29.968318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:29.982911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:29.983001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:30.000604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:30.000674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:30.016178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:30.016262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:30.032754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:30.032829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:30.048863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:30.048937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:30.067572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:30.067645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.473 [2024-07-22 17:02:30.083295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.473 [2024-07-22 17:02:30.083370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.100292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.100363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.118726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.118816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.134423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.134489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.151145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.151219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.166728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.166794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.183261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.183328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.200319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.200401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.218425] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.218495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.232105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.232165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.248764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.248835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.266411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.266476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.281056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.281129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.298594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.298658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.314264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.314327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.731 [2024-07-22 17:02:30.331454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.731 [2024-07-22 17:02:30.331511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.349198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.349273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.365353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.365419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.381783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.381851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.391586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.391642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.407494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.407571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.429048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.429119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.444981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.445080] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.461161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.461224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.471305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.471366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.487503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.487566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.504983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.505044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.520360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.520420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.529956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.530014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.990 [2024-07-22 17:02:30.545841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.990 [2024-07-22 17:02:30.545904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.991 [2024-07-22 17:02:30.563259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.991 [2024-07-22 17:02:30.563327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.991 [2024-07-22 17:02:30.579334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.991 [2024-07-22 17:02:30.579391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.991 [2024-07-22 17:02:30.589642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.991 [2024-07-22 17:02:30.589700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:28.991 [2024-07-22 17:02:30.605929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:28.991 [2024-07-22 17:02:30.605992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.622899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.622967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.638455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.638508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.648937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.649012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.665039] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.665099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.682176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.682259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.698575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.698639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.716559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.716624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.733320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.733384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.752082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.752147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.767927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.767991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.786007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.786072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.253 [2024-07-22 17:02:30.801577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.253 [2024-07-22 17:02:30.801647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.254 [2024-07-22 17:02:30.820015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.254 [2024-07-22 17:02:30.820082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.254 [2024-07-22 17:02:30.840886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.254 [2024-07-22 17:02:30.840951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.254 [2024-07-22 17:02:30.855817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.254 [2024-07-22 17:02:30.855895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.873028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.873096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.889145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.889204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.899178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.899234] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.915211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.915304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.932766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.932830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.947629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.947712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.964375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.964447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.980551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.980617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:30.997215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:30.997286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.014239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.014305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.031735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.031797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.047665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.047742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.064028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.064093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.082191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.082273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.097163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.097230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.511 [2024-07-22 17:02:31.113330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.511 [2024-07-22 17:02:31.113400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.130335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.130404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.146265] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.146329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.163644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.163743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.180046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.180118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.196814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.196886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.213556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.213621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.230605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.230687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.246527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.246596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.263450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.263507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.280018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.280077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.296926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.296996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.313786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.313862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.330382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.330451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.347088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.347148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.365461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.365521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.770 [2024-07-22 17:02:31.380643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:29.770 [2024-07-22 17:02:31.380709] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.397241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.397314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.413903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.413967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.430889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.430943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.447812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.447881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.464791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.464852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.480844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.480904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.497692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.497747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.515315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.515377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.530309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.530371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.547507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.547573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.564730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.564806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.580818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.580893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.597623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.597685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.614652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.614712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.028 [2024-07-22 17:02:31.630997] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.028 [2024-07-22 17:02:31.631057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.647931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.647994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.664456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.664518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.681982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.682052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.697584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.697644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.707117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.707172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.725684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.725740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.741171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.741228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.757445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.757501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.776398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.776473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.792209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.792283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.808327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.808396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.823772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.823863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.840636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.840722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.857749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.857816] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.875001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.875066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.286 [2024-07-22 17:02:31.891608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.286 [2024-07-22 17:02:31.891681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:31.908124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:31.908190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:31.925043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:31.925105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:31.940808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:31.940871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:31.950490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:31.950545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:31.966681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:31.966739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:31.983013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:31.983071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:32.000720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:32.000778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:32.015173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:32.015230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:32.032568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:32.032631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:32.047356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:32.047422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:32.063487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:32.063548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.544 [2024-07-22 17:02:32.082920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.544 [2024-07-22 17:02:32.082988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.545 [2024-07-22 17:02:32.098418] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.545 [2024-07-22 17:02:32.098483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.545 [2024-07-22 17:02:32.108818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.545 [2024-07-22 17:02:32.108875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.545 [2024-07-22 17:02:32.124361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.545 [2024-07-22 17:02:32.124416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.545 [2024-07-22 17:02:32.141537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.545 [2024-07-22 17:02:32.141591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.545 [2024-07-22 17:02:32.159916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.545 [2024-07-22 17:02:32.159974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.175190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.175281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.185383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.185450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.201251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.201330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.217672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.217738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.233957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.234021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.251172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.251241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.267624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.267776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.284275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.284349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.300154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.300220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.316282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.316351] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.333329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.333402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.350626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.350695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.366881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.366947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.384964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.385029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:30.802 [2024-07-22 17:02:32.401072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:30.802 [2024-07-22 17:02:32.401136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.419381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.419443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.434605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.434664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.451168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.451254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.467506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.467575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.477634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.477708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.492763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.492846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.507614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.507678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.523780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.523864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.540981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.541044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.559044] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.559105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.574754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.574819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.591503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.591572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.608276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.608351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.624401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.624458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.634509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.634581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.650585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.650639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.665858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.665919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.060 [2024-07-22 17:02:32.675822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.060 [2024-07-22 17:02:32.675875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.692948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.693023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.708983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.709050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.725911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.725986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.742339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.742406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.759369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.759429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.775616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.775717] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.793943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.794012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.809220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.809291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.826852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.826917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.841719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.841778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.858370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.858435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.874056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.874113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.885631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.885685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.901097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.901163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.918538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.918601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.319 [2024-07-22 17:02:32.935772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.319 [2024-07-22 17:02:32.935834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:32.951619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:32.951687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:32.969645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:32.969711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:32.985137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:32.985196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.004069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.004133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.019428] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.019488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.036512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.036574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.052966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.053030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.070029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.070091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.088444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.088517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.104053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.104126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.114329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.114389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.129987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.130048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.146007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.146067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.162415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.162478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.578 [2024-07-22 17:02:33.179342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.578 [2024-07-22 17:02:33.179403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.195957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.196028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.212227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.212327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.231123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.231195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.246365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.246448] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.262928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.262991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.279485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.279551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.297186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.297280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.313837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.313906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.330410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.330468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.345980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.346039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.363894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.363959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.379392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.379459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.389243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.389317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.405484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.405548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.422408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.422467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:31.845 [2024-07-22 17:02:33.439409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:31.845 [2024-07-22 17:02:33.439471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.461047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.461122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.477247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.477347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.495093] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.495171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.509199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.509270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.525506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.525566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.542246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.542321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.558446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.558507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.575461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.575524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.592136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.592202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.609866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.609934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.625579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.625648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.635181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.635264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.651507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.651574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.668068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.668132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.685008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.685071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.702508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.702572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.108 [2024-07-22 17:02:33.716823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.108 [2024-07-22 17:02:33.716885] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.733272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.733330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.749431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.749493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.760660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.760738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.777971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.778042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.793781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.793850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.804450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.804513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.819747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.819818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.836184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.836260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.853740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.853805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.870029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.870096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.887068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.887134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.903428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.903508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.919433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.919492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.938057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.938121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.952730] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.952805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.968545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.968612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.366 [2024-07-22 17:02:33.980029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.366 [2024-07-22 17:02:33.980097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:33.997524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:33.997598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.011675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.011737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.028186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.028258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.045175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.045238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.062128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.062190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.077720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.077785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.087872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.087928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.102386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.102443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.117775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.117835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.134080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.134138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.150898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.150962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.167237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.167308] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.183842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.183908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.201184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.201258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.217681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.217739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.624 [2024-07-22 17:02:34.234329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.624 [2024-07-22 17:02:34.234386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.883 [2024-07-22 17:02:34.251594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.883 [2024-07-22 17:02:34.251666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.883 [2024-07-22 17:02:34.266785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.883 [2024-07-22 17:02:34.266853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.883 [2024-07-22 17:02:34.282666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.883 [2024-07-22 17:02:34.282726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.292675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.292753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.307121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.307199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.322959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.323023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.340228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.340319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.355975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.356053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.366084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.366144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.380366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.380423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.396868] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.396927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.413643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.413701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.429433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.429489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.439080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.439131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.454965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.455025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.465167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.465222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.480179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.480239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:32.884 [2024-07-22 17:02:34.496317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:32.884 [2024-07-22 17:02:34.496372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.514468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.514525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.530057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.530117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.546562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.546623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.564863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.564925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.580392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.580452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.597807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.597864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.182 [2024-07-22 17:02:34.613622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.182 [2024-07-22 17:02:34.613678] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.631828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.631894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.645952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.646008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.662475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.662533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.679085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.679144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.695905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.695960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.711877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.711927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.730479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.730536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.744806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.744873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 
00:24:33.182                                                                                           Latency(us)
00:24:33.182 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:33.182 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:24:33.182   Nvme1n1          :       5.01   11263.06      87.99       0.00       0.00   11349.51    4306.65   21595.67
00:24:33.182 ===================================================================================================================
00:24:33.182   Total            :            11263.06      87.99       0.00       0.00   11349.51    4306.65   21595.67
00:24:33.182 [2024-07-22 17:02:34.754859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.754910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.766880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.766930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.778920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.778994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.182 [2024-07-22 17:02:34.790946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:33.182 [2024-07-22 17:02:34.790998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:33.442 [2024-07-22 17:02:34.802952]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.442 [2024-07-22 17:02:34.803010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.442 [2024-07-22 17:02:34.814940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.442 [2024-07-22 17:02:34.814988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.442 [2024-07-22 17:02:34.826939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.442 [2024-07-22 17:02:34.826986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.442 [2024-07-22 17:02:34.838943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.442 [2024-07-22 17:02:34.838990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.442 [2024-07-22 17:02:34.850915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.442 [2024-07-22 17:02:34.850963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.442 [2024-07-22 17:02:34.862954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.442 [2024-07-22 17:02:34.863004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.874957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.875008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.886929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.886983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.898965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.899019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.910967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.911020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.922961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.923010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.934986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.935037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.946953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.947005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.959036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.959101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.970984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.971034] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.982950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.983001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:34.994992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:34.995046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:35.007001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:35.007053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:35.018957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:35.019002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:35.030984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:35.031029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:35.042953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:35.042997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.443 [2024-07-22 17:02:35.055015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.443 [2024-07-22 17:02:35.055061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.067020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.067078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.079024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.079074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.091020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.091071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.103033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.103084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.115015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.115077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.127046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.127107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.139039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.702 [2024-07-22 17:02:35.139102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.702 [2024-07-22 17:02:35.151057] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.151113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.163059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.163113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.175048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.175117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.187082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.187139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.199075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.199128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.211060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.211127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.223090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.223142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.235075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.235123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.247101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.247154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.259101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.259153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.271090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.271146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.283123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.283183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.295140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.295192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.307096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.307154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.703 [2024-07-22 17:02:35.319125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.703 [2024-07-22 17:02:35.319181] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.965 [2024-07-22 17:02:35.331089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.965 [2024-07-22 17:02:35.331139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.965 [2024-07-22 17:02:35.351144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.965 [2024-07-22 17:02:35.351206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.965 [2024-07-22 17:02:35.363159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.965 [2024-07-22 17:02:35.363214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.965 [2024-07-22 17:02:35.375142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.965 [2024-07-22 17:02:35.375195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.965 [2024-07-22 17:02:35.387137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.965 [2024-07-22 17:02:35.387191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.965 [2024-07-22 17:02:35.399158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.965 [2024-07-22 17:02:35.399213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.411126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.411180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.423169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.423221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.435144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.435197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.447177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.447235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.459199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.459288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.471153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.471205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.483179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.483231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.495165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.495213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.507150] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.507202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.519212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.519273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.531163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.531213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.543200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.543277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.555220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.555296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.567172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.567220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:33.966 [2024-07-22 17:02:35.579217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:33.966 [2024-07-22 17:02:35.579283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.591241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.591308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.603289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.603338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.615231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.615312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.627185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.627235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.639228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.639312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.651269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.651323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.663284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.663334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.675273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.675325] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.687276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.687330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.699241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.699305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.711284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.711335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.723257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.723307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.735290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.735341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.747289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.747342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.759273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.759326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.771316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.771369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.783317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.783370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.795286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.795337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.807362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.807414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.819310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.819358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.831319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.831371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.226 [2024-07-22 17:02:35.843336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.226 [2024-07-22 17:02:35.843391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.855312] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.855367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.867374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.867430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.879348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.879404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.891329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.891386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.903362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.903417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.915344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.915418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.927369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.927425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.939366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.939432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.951376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.951438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.963396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.963454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.979387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.979443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:35.991373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:35.991433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.003394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.003449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.015370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.015429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.027372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.027426] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.039384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.039437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.051369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.051423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.067397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.067465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.079398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.079464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.091368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.091415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.488 [2024-07-22 17:02:36.103410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.488 [2024-07-22 17:02:36.103456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.747 [2024-07-22 17:02:36.115384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.115430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.127410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.127461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.139413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.139462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.151391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.151441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.163416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.163463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.175446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.175500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.187402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.187450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.199455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.199512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.211418] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.211472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.223474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.223527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 [2024-07-22 17:02:36.235472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.748 [2024-07-22 17:02:36.235527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.748 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (72155) - No such process 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 72155 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:34.748 delay0 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.748 17:02:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:24:35.006 [2024-07-22 17:02:36.501092] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:24:41.568 Initializing NVMe Controllers 00:24:41.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.568 Initialization complete. Launching workers. 
00:24:41.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76 00:24:41.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33 00:24:41.568 success 266, unsuccess 97, failed 0 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.568 rmmod nvme_tcp 00:24:41.568 rmmod nvme_fabrics 00:24:41.568 rmmod nvme_keyring 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 71983 ']' 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 71983 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 71983 ']' 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 71983 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71983 00:24:41.568 killing process with pid 71983 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71983' 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 71983 00:24:41.568 17:02:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 71983 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:43.018 00:24:43.018 real 0m29.953s 00:24:43.018 user 0m49.269s 00:24:43.018 sys 0m7.958s 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:43.018 ************************************ 00:24:43.018 END TEST nvmf_zcopy 00:24:43.018 ************************************ 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:24:43.018 17:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:43.019 ************************************ 00:24:43.019 START TEST nvmf_nmic 00:24:43.019 ************************************ 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:24:43.019 * Looking for test storage... 00:24:43.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:43.019 Cannot find device "nvmf_tgt_br" 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:43.019 Cannot find device "nvmf_tgt_br2" 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:43.019 Cannot find device "nvmf_tgt_br" 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:43.019 Cannot find device "nvmf_tgt_br2" 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:24:43.019 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:43.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:43.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:43.020 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:43.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:24:43.279 00:24:43.279 --- 10.0.0.2 ping statistics --- 00:24:43.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.279 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:43.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:43.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:24:43.279 00:24:43.279 --- 10.0.0.3 ping statistics --- 00:24:43.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.279 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:43.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:43.279 00:24:43.279 --- 10.0.0.1 ping statistics --- 00:24:43.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.279 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=72507 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 72507 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 72507 ']' 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.279 17:02:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:43.537 [2024-07-22 17:02:44.917339] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
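For reference, the veth/bridge topology that the nvmf_veth_init steps above configure can be reproduced standalone with plain iproute2 and iptables. The following is only a condensed sketch, not part of the test log: the interface names, namespace name, and 10.0.0.x addresses are taken from the commands recorded above, while the loop form and ordering are an editorial simplification.

    # create the target network namespace and three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side interfaces into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # address the initiator (10.0.0.1) and the two target interfaces (10.0.0.2, 10.0.0.3)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side peers together
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check, matching the pings in the log: initiator -> target and target -> initiator
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place, the target process is launched inside nvmf_tgt_ns_spdk (as the ip netns exec nvmf_tgt invocation below shows), while the initiator-side nvme connect commands run in the host namespace against 10.0.0.2.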
00:24:43.537 [2024-07-22 17:02:44.917875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.537 [2024-07-22 17:02:45.079150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.796 [2024-07-22 17:02:45.328176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.796 [2024-07-22 17:02:45.328240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.796 [2024-07-22 17:02:45.328262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.796 [2024-07-22 17:02:45.328277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.796 [2024-07-22 17:02:45.328308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.796 [2024-07-22 17:02:45.328516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.796 [2024-07-22 17:02:45.328637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.796 [2024-07-22 17:02:45.328963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.796 [2024-07-22 17:02:45.328970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.054 [2024-07-22 17:02:45.591643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.325 [2024-07-22 17:02:45.880804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.325 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 Malloc0 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:44.587 17:02:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 [2024-07-22 17:02:46.001335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.587 test case1: single bdev can't be used in multiple subsystems 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 [2024-07-22 17:02:46.025122] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:24:44.587 [2024-07-22 17:02:46.025182] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:24:44.587 [2024-07-22 17:02:46.025198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:44.587 request: 00:24:44.587 { 00:24:44.587 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:24:44.587 "namespace": { 00:24:44.587 "bdev_name": "Malloc0", 00:24:44.587 "no_auto_visible": false 00:24:44.587 }, 00:24:44.587 "method": "nvmf_subsystem_add_ns", 00:24:44.587 "req_id": 1 00:24:44.587 } 00:24:44.587 Got JSON-RPC error response 00:24:44.587 response: 00:24:44.587 { 00:24:44.587 "code": -32602, 00:24:44.587 "message": "Invalid parameters" 00:24:44.587 } 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:24:44.587 Adding namespace failed - expected result. 00:24:44.587 test case2: host connect to nvmf target in multiple paths 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:44.587 [2024-07-22 17:02:46.037325] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:44.587 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:24:44.846 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:24:44.846 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:24:44.846 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.846 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:44.846 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:24:46.882 17:02:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:46.882 17:02:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:46.882 17:02:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:46.882 17:02:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:46.882 17:02:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.882 17:02:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:24:46.882 17:02:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:46.882 [global] 00:24:46.882 thread=1 00:24:46.882 invalidate=1 00:24:46.882 rw=write 00:24:46.882 time_based=1 00:24:46.882 runtime=1 00:24:46.882 ioengine=libaio 00:24:46.882 direct=1 00:24:46.882 bs=4096 00:24:46.882 iodepth=1 00:24:46.882 norandommap=0 00:24:46.882 numjobs=1 00:24:46.882 00:24:46.882 verify_dump=1 00:24:46.882 verify_backlog=512 00:24:46.882 verify_state_save=0 00:24:46.882 do_verify=1 00:24:46.882 verify=crc32c-intel 00:24:46.882 [job0] 00:24:46.882 filename=/dev/nvme0n1 00:24:46.882 Could not set queue depth (nvme0n1) 00:24:46.882 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:46.882 fio-3.35 00:24:46.882 Starting 1 thread 00:24:48.263 00:24:48.263 job0: (groupid=0, jobs=1): err= 0: pid=72599: Mon Jul 22 17:02:49 2024 00:24:48.263 read: IOPS=2636, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:24:48.263 slat (nsec): min=7644, max=59480, avg=10221.59, stdev=2640.92 00:24:48.263 clat (usec): min=146, max=2394, avg=201.98, stdev=48.99 00:24:48.263 lat (usec): min=154, max=2403, avg=212.20, stdev=49.26 00:24:48.263 clat percentiles (usec): 00:24:48.263 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:24:48.263 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:24:48.263 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 237], 00:24:48.263 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 529], 99.95th=[ 676], 00:24:48.263 | 99.99th=[ 2409] 00:24:48.263 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:24:48.263 slat (usec): min=11, max=203, avg=15.98, stdev= 5.87 00:24:48.263 clat (usec): min=89, max=353, avg=125.22, stdev=15.10 00:24:48.263 lat (usec): min=101, max=557, avg=141.20, stdev=17.50 00:24:48.263 clat percentiles (usec): 00:24:48.263 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 110], 20.00th=[ 114], 00:24:48.263 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 128], 00:24:48.263 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 151], 00:24:48.263 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 202], 99.95th=[ 219], 00:24:48.263 | 99.99th=[ 355] 00:24:48.263 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:24:48.263 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:24:48.263 lat (usec) : 100=1.24%, 250=97.81%, 500=0.89%, 750=0.04% 00:24:48.263 lat (msec) : 4=0.02% 00:24:48.263 cpu : usr=1.30%, sys=6.30%, ctx=5711, majf=0, minf=2 00:24:48.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:48.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.263 issued rwts: total=2639,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.263 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:48.263 00:24:48.263 Run status group 0 (all jobs): 00:24:48.263 READ: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.3MiB (10.8MB), run=1001-1001msec 00:24:48.263 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:24:48.263 00:24:48.263 Disk stats (read/write): 00:24:48.263 nvme0n1: 
ios=2555/2560, merge=0/0, ticks=540/343, in_queue=883, util=91.58% 00:24:48.263 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:48.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:48.263 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:48.264 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:48.522 rmmod nvme_tcp 00:24:48.522 rmmod nvme_fabrics 00:24:48.522 rmmod nvme_keyring 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 72507 ']' 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 72507 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 72507 ']' 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 72507 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72507 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:48.522 killing process with pid 72507 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72507' 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@967 -- # kill 72507 00:24:48.522 17:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 72507 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:50.433 00:24:50.433 real 0m7.498s 00:24:50.433 user 0m22.531s 00:24:50.433 sys 0m2.690s 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:50.433 ************************************ 00:24:50.433 END TEST nvmf_nmic 00:24:50.433 ************************************ 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:50.433 ************************************ 00:24:50.433 START TEST nvmf_fio_target 00:24:50.433 ************************************ 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:50.433 * Looking for test storage... 
00:24:50.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:50.433 17:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.433 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:24:50.434 
17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:50.434 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:50.691 Cannot find device "nvmf_tgt_br" 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.691 Cannot find device "nvmf_tgt_br2" 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:50.691 Cannot find device "nvmf_tgt_br" 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:50.691 Cannot find device "nvmf_tgt_br2" 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:50.691 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:50.692 
17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:50.692 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:50.949 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:50.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:50.950 00:24:50.950 --- 10.0.0.2 ping statistics --- 00:24:50.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.950 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:50.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:50.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:50.950 00:24:50.950 --- 10.0.0.3 ping statistics --- 00:24:50.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.950 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:50.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:24:50.950 00:24:50.950 --- 10.0.0.1 ping statistics --- 00:24:50.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.950 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=72794 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 72794 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 72794 ']' 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:50.950 17:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.950 [2024-07-22 17:02:52.533987] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
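The nvmf_veth_init sequence above builds the virtual test network that the rest of nvmf_fio_target runs on: a network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, addresses on 10.0.0.0/24, an iptables rule admitting TCP port 4420, and a ping check in each direction. A condensed sketch of the same steps (the second target leg, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins host and namespace sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                          # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host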
00:24:50.950 [2024-07-22 17:02:52.534115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.208 [2024-07-22 17:02:52.710012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.467 [2024-07-22 17:02:52.989601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.467 [2024-07-22 17:02:52.989946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.467 [2024-07-22 17:02:52.990125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.467 [2024-07-22 17:02:52.990302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.467 [2024-07-22 17:02:52.990460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.467 [2024-07-22 17:02:52.990758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.467 [2024-07-22 17:02:52.990897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.467 [2024-07-22 17:02:52.991137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.467 [2024-07-22 17:02:52.991140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.725 [2024-07-22 17:02:53.255024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:51.982 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.982 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:24:51.983 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.983 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.983 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.983 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.983 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:52.241 [2024-07-22 17:02:53.832183] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.515 17:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:52.785 17:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:24:52.785 17:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:53.045 17:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:24:53.045 17:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:53.304 17:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:24:53.304 17:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:53.882 17:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:24:53.882 17:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:24:53.882 17:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:54.144 17:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:24:54.144 17:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:54.403 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:24:54.403 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:54.989 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:24:54.989 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:24:54.989 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:55.248 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:55.248 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.521 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:55.521 17:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:55.521 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.780 [2024-07-22 17:02:57.281144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.780 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:24:56.039 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:24:56.296 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:56.555 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:24:56.555 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.555 17:02:57 
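Everything the fio jobs exercise below was provisioned over rpc.py in the trace above: a TCP transport, two standalone malloc bdevs plus raid0 and concat0 arrays built from further mallocs, one subsystem carrying all four as namespaces, and a listener on 10.0.0.2:4420 that the initiator then connects to, waiting until four SPDKISFASTANDAWESOME block devices appear. A condensed sketch of that same sequence, with the names, sizes (64 MiB malloc bdevs, 512-byte blocks), and host NQN/ID from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ... wait for the target to listen on /var/tmp/spdk.sock, then:
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512            # Malloc0; repeated for Malloc1 .. Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
               --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420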
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.555 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:24:56.555 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:24:56.555 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:24:58.459 17:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:58.459 [global] 00:24:58.459 thread=1 00:24:58.459 invalidate=1 00:24:58.459 rw=write 00:24:58.459 time_based=1 00:24:58.459 runtime=1 00:24:58.459 ioengine=libaio 00:24:58.459 direct=1 00:24:58.459 bs=4096 00:24:58.459 iodepth=1 00:24:58.459 norandommap=0 00:24:58.459 numjobs=1 00:24:58.459 00:24:58.459 verify_dump=1 00:24:58.459 verify_backlog=512 00:24:58.459 verify_state_save=0 00:24:58.459 do_verify=1 00:24:58.459 verify=crc32c-intel 00:24:58.459 [job0] 00:24:58.459 filename=/dev/nvme0n1 00:24:58.459 [job1] 00:24:58.459 filename=/dev/nvme0n2 00:24:58.459 [job2] 00:24:58.459 filename=/dev/nvme0n3 00:24:58.459 [job3] 00:24:58.459 filename=/dev/nvme0n4 00:24:58.459 Could not set queue depth (nvme0n1) 00:24:58.459 Could not set queue depth (nvme0n2) 00:24:58.459 Could not set queue depth (nvme0n3) 00:24:58.459 Could not set queue depth (nvme0n4) 00:24:58.731 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:58.731 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:58.731 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:58.731 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:58.731 fio-3.35 00:24:58.731 Starting 4 threads 00:25:00.105 00:25:00.105 job0: (groupid=0, jobs=1): err= 0: pid=72979: Mon Jul 22 17:03:01 2024 00:25:00.105 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:00.105 slat (nsec): min=7972, max=74668, avg=16212.51, stdev=8313.68 00:25:00.105 clat (usec): min=184, max=1682, avg=345.89, stdev=110.09 00:25:00.105 lat (usec): min=195, max=1731, avg=362.11, stdev=112.67 00:25:00.105 clat percentiles (usec): 00:25:00.105 | 1.00th=[ 200], 5.00th=[ 239], 10.00th=[ 258], 20.00th=[ 277], 00:25:00.105 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 343], 00:25:00.105 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 441], 95.00th=[ 502], 00:25:00.105 | 99.00th=[ 742], 99.50th=[ 848], 99.90th=[ 1532], 99.95th=[ 1680], 00:25:00.105 | 99.99th=[ 
1680] 00:25:00.105 write: IOPS=1760, BW=7041KiB/s (7210kB/s)(7048KiB/1001msec); 0 zone resets 00:25:00.105 slat (usec): min=13, max=143, avg=26.38, stdev=10.39 00:25:00.105 clat (usec): min=110, max=1224, avg=221.94, stdev=69.39 00:25:00.105 lat (usec): min=127, max=1243, avg=248.32, stdev=74.15 00:25:00.105 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 119], 5.00th=[ 129], 10.00th=[ 139], 20.00th=[ 169], 00:25:00.106 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 229], 00:25:00.106 | 70.00th=[ 253], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 00:25:00.106 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 1123], 99.95th=[ 1221], 00:25:00.106 | 99.99th=[ 1221] 00:25:00.106 bw ( KiB/s): min= 8175, max= 8175, per=23.53%, avg=8175.00, stdev= 0.00, samples=1 00:25:00.106 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:25:00.106 lat (usec) : 250=40.54%, 500=56.97%, 750=1.94%, 1000=0.33% 00:25:00.106 lat (msec) : 2=0.21% 00:25:00.106 cpu : usr=1.70%, sys=5.50%, ctx=3301, majf=0, minf=8 00:25:00.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:00.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 issued rwts: total=1536,1762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:00.106 job1: (groupid=0, jobs=1): err= 0: pid=72980: Mon Jul 22 17:03:01 2024 00:25:00.106 read: IOPS=1502, BW=6010KiB/s (6154kB/s)(6016KiB/1001msec) 00:25:00.106 slat (usec): min=8, max=606, avg=17.99, stdev=17.82 00:25:00.106 clat (usec): min=196, max=1764, avg=358.46, stdev=92.37 00:25:00.106 lat (usec): min=212, max=1779, avg=376.45, stdev=97.74 00:25:00.106 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 245], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 289], 00:25:00.106 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 343], 60.00th=[ 371], 00:25:00.106 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 469], 95.00th=[ 510], 00:25:00.106 | 99.00th=[ 668], 99.50th=[ 750], 99.90th=[ 938], 99.95th=[ 1762], 00:25:00.106 | 99.99th=[ 1762] 00:25:00.106 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:00.106 slat (usec): min=15, max=498, avg=28.79, stdev=17.04 00:25:00.106 clat (usec): min=109, max=2301, avg=249.54, stdev=99.96 00:25:00.106 lat (usec): min=125, max=2322, avg=278.33, stdev=106.24 00:25:00.106 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 123], 5.00th=[ 137], 10.00th=[ 165], 20.00th=[ 196], 00:25:00.106 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 262], 00:25:00.106 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 338], 95.00th=[ 359], 00:25:00.106 | 99.00th=[ 461], 99.50th=[ 668], 99.90th=[ 1434], 99.95th=[ 2311], 00:25:00.106 | 99.99th=[ 2311] 00:25:00.106 bw ( KiB/s): min= 7512, max= 7512, per=21.62%, avg=7512.00, stdev= 0.00, samples=1 00:25:00.106 iops : min= 1878, max= 1878, avg=1878.00, stdev= 0.00, samples=1 00:25:00.106 lat (usec) : 250=29.14%, 500=67.14%, 750=3.29%, 1000=0.26% 00:25:00.106 lat (msec) : 2=0.13%, 4=0.03% 00:25:00.106 cpu : usr=1.50%, sys=5.50%, ctx=3041, majf=0, minf=11 00:25:00.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:00.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 issued rwts: total=1504,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:00.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:00.106 job2: (groupid=0, jobs=1): err= 0: pid=72981: Mon Jul 22 17:03:01 2024 00:25:00.106 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:00.106 slat (nsec): min=7841, max=38260, avg=11392.40, stdev=2932.28 00:25:00.106 clat (usec): min=153, max=1014, avg=193.59, stdev=26.86 00:25:00.106 lat (usec): min=161, max=1026, avg=204.98, stdev=27.79 00:25:00.106 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:25:00.106 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:25:00.106 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 227], 00:25:00.106 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 330], 99.95th=[ 676], 00:25:00.106 | 99.99th=[ 1012] 00:25:00.106 write: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:25:00.106 slat (usec): min=10, max=102, avg=17.39, stdev= 4.69 00:25:00.106 clat (usec): min=105, max=2860, avg=147.86, stdev=55.97 00:25:00.106 lat (usec): min=119, max=2879, avg=165.25, stdev=56.82 00:25:00.106 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 131], 00:25:00.106 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:25:00.106 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 182], 00:25:00.106 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 326], 99.95th=[ 865], 00:25:00.106 | 99.99th=[ 2868] 00:25:00.106 bw ( KiB/s): min=12263, max=12263, per=35.30%, avg=12263.00, stdev= 0.00, samples=1 00:25:00.106 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:25:00.106 lat (usec) : 250=99.35%, 500=0.57%, 750=0.02%, 1000=0.02% 00:25:00.106 lat (msec) : 2=0.02%, 4=0.02% 00:25:00.106 cpu : usr=1.50%, sys=6.50%, ctx=5396, majf=0, minf=3 00:25:00.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:00.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 issued rwts: total=2560,2836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:00.106 job3: (groupid=0, jobs=1): err= 0: pid=72982: Mon Jul 22 17:03:01 2024 00:25:00.106 read: IOPS=2355, BW=9423KiB/s (9649kB/s)(9432KiB/1001msec) 00:25:00.106 slat (nsec): min=7824, max=44484, avg=12561.77, stdev=3600.02 00:25:00.106 clat (usec): min=166, max=534, avg=213.96, stdev=27.13 00:25:00.106 lat (usec): min=174, max=547, avg=226.52, stdev=28.20 00:25:00.106 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:25:00.106 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 215], 00:25:00.106 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 260], 00:25:00.106 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 486], 99.95th=[ 515], 00:25:00.106 | 99.99th=[ 537] 00:25:00.106 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:25:00.106 slat (usec): min=12, max=148, avg=20.24, stdev= 7.06 00:25:00.106 clat (usec): min=119, max=2246, avg=159.05, stdev=52.06 00:25:00.106 lat (usec): min=132, max=2266, avg=179.29, stdev=53.17 00:25:00.106 clat percentiles (usec): 00:25:00.106 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:25:00.106 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:25:00.106 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 180], 
95.00th=[ 190], 00:25:00.106 | 99.00th=[ 215], 99.50th=[ 269], 99.90th=[ 570], 99.95th=[ 1319], 00:25:00.106 | 99.99th=[ 2245] 00:25:00.106 bw ( KiB/s): min=11153, max=11153, per=32.10%, avg=11153.00, stdev= 0.00, samples=1 00:25:00.106 iops : min= 2788, max= 2788, avg=2788.00, stdev= 0.00, samples=1 00:25:00.106 lat (usec) : 250=95.81%, 500=4.09%, 750=0.06% 00:25:00.106 lat (msec) : 2=0.02%, 4=0.02% 00:25:00.106 cpu : usr=1.60%, sys=6.60%, ctx=4918, majf=0, minf=13 00:25:00.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:00.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.106 issued rwts: total=2358,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:00.106 00:25:00.106 Run status group 0 (all jobs): 00:25:00.106 READ: bw=31.1MiB/s (32.6MB/s), 6010KiB/s-9.99MiB/s (6154kB/s-10.5MB/s), io=31.1MiB (32.6MB), run=1001-1001msec 00:25:00.106 WRITE: bw=33.9MiB/s (35.6MB/s), 6138KiB/s-11.1MiB/s (6285kB/s-11.6MB/s), io=34.0MiB (35.6MB), run=1001-1001msec 00:25:00.106 00:25:00.106 Disk stats (read/write): 00:25:00.106 nvme0n1: ios=1221/1536, merge=0/0, ticks=472/368, in_queue=840, util=87.47% 00:25:00.106 nvme0n2: ios=1159/1536, merge=0/0, ticks=444/401, in_queue=845, util=87.70% 00:25:00.106 nvme0n3: ios=2048/2516, merge=0/0, ticks=407/391, in_queue=798, util=88.90% 00:25:00.106 nvme0n4: ios=2048/2067, merge=0/0, ticks=438/351, in_queue=789, util=89.65% 00:25:00.106 17:03:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:25:00.106 [global] 00:25:00.106 thread=1 00:25:00.106 invalidate=1 00:25:00.106 rw=randwrite 00:25:00.106 time_based=1 00:25:00.106 runtime=1 00:25:00.106 ioengine=libaio 00:25:00.106 direct=1 00:25:00.106 bs=4096 00:25:00.106 iodepth=1 00:25:00.106 norandommap=0 00:25:00.106 numjobs=1 00:25:00.106 00:25:00.106 verify_dump=1 00:25:00.106 verify_backlog=512 00:25:00.106 verify_state_save=0 00:25:00.106 do_verify=1 00:25:00.106 verify=crc32c-intel 00:25:00.106 [job0] 00:25:00.106 filename=/dev/nvme0n1 00:25:00.106 [job1] 00:25:00.106 filename=/dev/nvme0n2 00:25:00.106 [job2] 00:25:00.106 filename=/dev/nvme0n3 00:25:00.106 [job3] 00:25:00.106 filename=/dev/nvme0n4 00:25:00.106 Could not set queue depth (nvme0n1) 00:25:00.106 Could not set queue depth (nvme0n2) 00:25:00.106 Could not set queue depth (nvme0n3) 00:25:00.106 Could not set queue depth (nvme0n4) 00:25:00.106 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:00.106 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:00.106 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:00.106 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:00.106 fio-3.35 00:25:00.106 Starting 4 threads 00:25:01.487 00:25:01.487 job0: (groupid=0, jobs=1): err= 0: pid=73041: Mon Jul 22 17:03:02 2024 00:25:01.487 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:25:01.487 slat (usec): min=6, max=180, avg=10.89, stdev= 6.33 00:25:01.487 clat (usec): min=137, max=582, avg=240.82, stdev=81.60 00:25:01.487 lat (usec): min=145, max=598, avg=251.71, stdev=81.81 
00:25:01.487 clat percentiles (usec): 00:25:01.487 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:25:01.487 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 225], 60.00th=[ 265], 00:25:01.487 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 367], 00:25:01.487 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 570], 00:25:01.487 | 99.99th=[ 586] 00:25:01.487 write: IOPS=2538, BW=9.92MiB/s (10.4MB/s)(9.93MiB/1001msec); 0 zone resets 00:25:01.487 slat (usec): min=7, max=118, avg=16.15, stdev= 5.95 00:25:01.487 clat (usec): min=95, max=711, avg=172.46, stdev=62.35 00:25:01.487 lat (usec): min=112, max=723, avg=188.61, stdev=62.34 00:25:01.487 clat percentiles (usec): 00:25:01.487 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 120], 00:25:01.487 | 30.00th=[ 124], 40.00th=[ 131], 50.00th=[ 143], 60.00th=[ 186], 00:25:01.487 | 70.00th=[ 200], 80.00th=[ 223], 90.00th=[ 277], 95.00th=[ 293], 00:25:01.487 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 412], 99.95th=[ 553], 00:25:01.488 | 99.99th=[ 709] 00:25:01.488 bw ( KiB/s): min=12288, max=12288, per=33.90%, avg=12288.00, stdev= 0.00, samples=1 00:25:01.488 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:01.488 lat (usec) : 100=0.04%, 250=70.28%, 500=29.07%, 750=0.61% 00:25:01.488 cpu : usr=1.90%, sys=4.60%, ctx=4639, majf=0, minf=8 00:25:01.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 issued rwts: total=2048,2541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:01.488 job1: (groupid=0, jobs=1): err= 0: pid=73042: Mon Jul 22 17:03:02 2024 00:25:01.488 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:01.488 slat (nsec): min=6525, max=37185, avg=11460.16, stdev=3499.82 00:25:01.488 clat (usec): min=201, max=1192, avg=300.06, stdev=46.91 00:25:01.488 lat (usec): min=221, max=1206, avg=311.52, stdev=46.84 00:25:01.488 clat percentiles (usec): 00:25:01.488 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:25:01.488 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 302], 00:25:01.488 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 00:25:01.488 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 494], 99.95th=[ 1188], 00:25:01.488 | 99.99th=[ 1188] 00:25:01.488 write: IOPS=1957, BW=7828KiB/s (8016kB/s)(7836KiB/1001msec); 0 zone resets 00:25:01.488 slat (usec): min=8, max=525, avg=17.48, stdev=12.82 00:25:01.488 clat (usec): min=136, max=3806, avg=246.52, stdev=137.88 00:25:01.488 lat (usec): min=150, max=3825, avg=264.00, stdev=139.12 00:25:01.488 clat percentiles (usec): 00:25:01.488 | 1.00th=[ 149], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:25:01.488 | 30.00th=[ 206], 40.00th=[ 219], 50.00th=[ 235], 60.00th=[ 251], 00:25:01.488 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 330], 00:25:01.488 | 99.00th=[ 367], 99.50th=[ 474], 99.90th=[ 3359], 99.95th=[ 3818], 00:25:01.488 | 99.99th=[ 3818] 00:25:01.488 bw ( KiB/s): min= 8192, max= 8192, per=22.60%, avg=8192.00, stdev= 0.00, samples=1 00:25:01.488 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:01.488 lat (usec) : 250=35.25%, 500=64.49%, 750=0.11%, 1000=0.03% 00:25:01.488 lat (msec) : 2=0.03%, 4=0.09% 00:25:01.488 cpu : usr=1.00%, sys=4.60%, ctx=3496, majf=0, minf=13 
00:25:01.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 issued rwts: total=1536,1959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:01.488 job2: (groupid=0, jobs=1): err= 0: pid=73043: Mon Jul 22 17:03:02 2024 00:25:01.488 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:01.488 slat (nsec): min=6117, max=30589, avg=9354.70, stdev=2062.39 00:25:01.488 clat (usec): min=144, max=440, avg=204.43, stdev=52.68 00:25:01.488 lat (usec): min=152, max=450, avg=213.79, stdev=52.63 00:25:01.488 clat percentiles (usec): 00:25:01.488 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:25:01.488 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:25:01.488 | 70.00th=[ 200], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 306], 00:25:01.488 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 437], 99.95th=[ 437], 00:25:01.488 | 99.99th=[ 441] 00:25:01.488 write: IOPS=2694, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:25:01.488 slat (usec): min=9, max=112, avg=15.37, stdev= 5.30 00:25:01.488 clat (usec): min=100, max=746, avg=150.29, stdev=34.66 00:25:01.488 lat (usec): min=114, max=760, avg=165.67, stdev=35.39 00:25:01.488 clat percentiles (usec): 00:25:01.488 | 1.00th=[ 112], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:25:01.488 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:25:01.488 | 70.00th=[ 153], 80.00th=[ 165], 90.00th=[ 208], 95.00th=[ 225], 00:25:01.488 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 314], 99.95th=[ 412], 00:25:01.488 | 99.99th=[ 750] 00:25:01.488 bw ( KiB/s): min=12288, max=12288, per=33.90%, avg=12288.00, stdev= 0.00, samples=1 00:25:01.488 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:01.488 lat (usec) : 250=89.84%, 500=10.14%, 750=0.02% 00:25:01.488 cpu : usr=1.60%, sys=5.30%, ctx=5257, majf=0, minf=5 00:25:01.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 issued rwts: total=2560,2697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:01.488 job3: (groupid=0, jobs=1): err= 0: pid=73044: Mon Jul 22 17:03:02 2024 00:25:01.488 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:01.488 slat (nsec): min=6113, max=47957, avg=10817.74, stdev=3104.94 00:25:01.488 clat (usec): min=231, max=1218, avg=308.96, stdev=46.97 00:25:01.488 lat (usec): min=239, max=1228, avg=319.78, stdev=47.43 00:25:01.488 clat percentiles (usec): 00:25:01.488 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:25:01.488 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 314], 00:25:01.488 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:25:01.488 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 1221], 00:25:01.488 | 99.99th=[ 1221] 00:25:01.488 write: IOPS=1871, BW=7485KiB/s (7664kB/s)(7492KiB/1001msec); 0 zone resets 00:25:01.488 slat (nsec): min=8197, max=74580, avg=19505.27, stdev=7149.81 00:25:01.488 clat (usec): min=125, max=3891, avg=249.95, stdev=149.76 00:25:01.488 lat (usec): min=145, 
max=3922, avg=269.46, stdev=150.80 00:25:01.488 clat percentiles (usec): 00:25:01.488 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 174], 20.00th=[ 204], 00:25:01.488 | 30.00th=[ 215], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 255], 00:25:01.488 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 326], 00:25:01.488 | 99.00th=[ 404], 99.50th=[ 676], 99.90th=[ 3392], 99.95th=[ 3884], 00:25:01.488 | 99.99th=[ 3884] 00:25:01.488 bw ( KiB/s): min= 8192, max= 8192, per=22.60%, avg=8192.00, stdev= 0.00, samples=1 00:25:01.488 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:01.488 lat (usec) : 250=31.92%, 500=67.64%, 750=0.23% 00:25:01.488 lat (msec) : 2=0.12%, 4=0.09% 00:25:01.488 cpu : usr=1.00%, sys=4.80%, ctx=3458, majf=0, minf=19 00:25:01.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.488 issued rwts: total=1536,1873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:01.488 00:25:01.488 Run status group 0 (all jobs): 00:25:01.488 READ: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:25:01.488 WRITE: bw=35.4MiB/s (37.1MB/s), 7485KiB/s-10.5MiB/s (7664kB/s-11.0MB/s), io=35.4MiB (37.1MB), run=1001-1001msec 00:25:01.488 00:25:01.488 Disk stats (read/write): 00:25:01.488 nvme0n1: ios=1905/2048, merge=0/0, ticks=479/327, in_queue=806, util=86.66% 00:25:01.488 nvme0n2: ios=1334/1536, merge=0/0, ticks=401/352, in_queue=753, util=85.64% 00:25:01.488 nvme0n3: ios=2170/2560, merge=0/0, ticks=395/386, in_queue=781, util=88.87% 00:25:01.489 nvme0n4: ios=1251/1536, merge=0/0, ticks=374/371, in_queue=745, util=88.56% 00:25:01.489 17:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:25:01.489 [global] 00:25:01.489 thread=1 00:25:01.489 invalidate=1 00:25:01.489 rw=write 00:25:01.489 time_based=1 00:25:01.489 runtime=1 00:25:01.489 ioengine=libaio 00:25:01.489 direct=1 00:25:01.489 bs=4096 00:25:01.489 iodepth=128 00:25:01.489 norandommap=0 00:25:01.489 numjobs=1 00:25:01.489 00:25:01.489 verify_dump=1 00:25:01.489 verify_backlog=512 00:25:01.489 verify_state_save=0 00:25:01.489 do_verify=1 00:25:01.489 verify=crc32c-intel 00:25:01.489 [job0] 00:25:01.489 filename=/dev/nvme0n1 00:25:01.489 [job1] 00:25:01.489 filename=/dev/nvme0n2 00:25:01.489 [job2] 00:25:01.489 filename=/dev/nvme0n3 00:25:01.489 [job3] 00:25:01.489 filename=/dev/nvme0n4 00:25:01.489 Could not set queue depth (nvme0n1) 00:25:01.489 Could not set queue depth (nvme0n2) 00:25:01.489 Could not set queue depth (nvme0n3) 00:25:01.489 Could not set queue depth (nvme0n4) 00:25:01.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:01.489 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:01.489 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:01.489 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:01.489 fio-3.35 00:25:01.489 Starting 4 threads 00:25:02.865 00:25:02.866 job0: (groupid=0, jobs=1): err= 0: pid=73098: Mon Jul 22 17:03:04 2024 
00:25:02.866 read: IOPS=4467, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:25:02.866 slat (usec): min=6, max=3569, avg=106.10, stdev=500.82 00:25:02.866 clat (usec): min=452, max=16088, avg=14071.81, stdev=1371.09 00:25:02.866 lat (usec): min=3370, max=16100, avg=14177.91, stdev=1277.87 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[ 7570], 5.00th=[12518], 10.00th=[13435], 20.00th=[13698], 00:25:02.866 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:25:02.866 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:25:02.866 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16057], 99.95th=[16057], 00:25:02.866 | 99.99th=[16057] 00:25:02.866 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:25:02.866 slat (usec): min=7, max=5258, avg=105.58, stdev=461.01 00:25:02.866 clat (usec): min=9926, max=16388, avg=13795.16, stdev=899.14 00:25:02.866 lat (usec): min=11622, max=16435, avg=13900.75, stdev=778.41 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[10945], 5.00th=[12518], 10.00th=[12911], 20.00th=[13173], 00:25:02.866 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:25:02.866 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15270], 00:25:02.866 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16319], 99.95th=[16319], 00:25:02.866 | 99.99th=[16450] 00:25:02.866 bw ( KiB/s): min=18200, max=18664, per=26.20%, avg=18432.00, stdev=328.10, samples=2 00:25:02.866 iops : min= 4550, max= 4666, avg=4608.00, stdev=82.02, samples=2 00:25:02.866 lat (usec) : 500=0.01% 00:25:02.866 lat (msec) : 4=0.33%, 10=0.39%, 20=99.27% 00:25:02.866 cpu : usr=5.09%, sys=12.57%, ctx=286, majf=0, minf=13 00:25:02.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:02.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.866 issued rwts: total=4481,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.866 job1: (groupid=0, jobs=1): err= 0: pid=73099: Mon Jul 22 17:03:04 2024 00:25:02.866 read: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec) 00:25:02.866 slat (usec): min=7, max=3718, avg=106.74, stdev=510.36 00:25:02.866 clat (usec): min=280, max=16213, avg=14000.91, stdev=1296.55 00:25:02.866 lat (usec): min=3663, max=16225, avg=14107.65, stdev=1195.38 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[ 7963], 5.00th=[12256], 10.00th=[13435], 20.00th=[13698], 00:25:02.866 | 30.00th=[13829], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:25:02.866 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:25:02.866 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:25:02.866 | 99.99th=[16188] 00:25:02.866 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:25:02.866 slat (usec): min=10, max=3347, avg=103.55, stdev=442.03 00:25:02.866 clat (usec): min=9986, max=15669, avg=13668.84, stdev=757.97 00:25:02.866 lat (usec): min=11262, max=15690, avg=13772.40, stdev=615.75 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[10945], 5.00th=[12518], 10.00th=[12780], 20.00th=[13173], 00:25:02.866 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[13960], 00:25:02.866 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14484], 95.00th=[14746], 00:25:02.866 | 99.00th=[15401], 99.50th=[15664], 
99.90th=[15664], 99.95th=[15664], 00:25:02.866 | 99.99th=[15664] 00:25:02.866 bw ( KiB/s): min=17240, max=19663, per=26.23%, avg=18451.50, stdev=1713.32, samples=2 00:25:02.866 iops : min= 4310, max= 4915, avg=4612.50, stdev=427.80, samples=2 00:25:02.866 lat (usec) : 500=0.01% 00:25:02.866 lat (msec) : 4=0.17%, 10=0.55%, 20=99.27% 00:25:02.866 cpu : usr=4.19%, sys=13.47%, ctx=288, majf=0, minf=11 00:25:02.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:02.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.866 issued rwts: total=4545,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.866 job2: (groupid=0, jobs=1): err= 0: pid=73100: Mon Jul 22 17:03:04 2024 00:25:02.866 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:25:02.866 slat (usec): min=5, max=4629, avg=120.08, stdev=488.16 00:25:02.866 clat (usec): min=3027, max=20260, avg=15586.44, stdev=1749.93 00:25:02.866 lat (usec): min=3045, max=20620, avg=15706.52, stdev=1788.48 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[ 8291], 5.00th=[13042], 10.00th=[14091], 20.00th=[14615], 00:25:02.866 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:25:02.866 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17433], 95.00th=[18220], 00:25:02.866 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:25:02.866 | 99.99th=[20317] 00:25:02.866 write: IOPS=4106, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1001msec); 0 zone resets 00:25:02.866 slat (usec): min=10, max=4450, avg=114.81, stdev=504.12 00:25:02.866 clat (usec): min=341, max=20216, avg=15197.97, stdev=1549.97 00:25:02.866 lat (usec): min=2415, max=20239, avg=15312.78, stdev=1610.46 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[11731], 5.00th=[13435], 10.00th=[13698], 20.00th=[14222], 00:25:02.866 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:25:02.866 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17957], 00:25:02.866 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:25:02.866 | 99.99th=[20317] 00:25:02.866 bw ( KiB/s): min=16384, max=16416, per=23.31%, avg=16400.00, stdev=22.63, samples=2 00:25:02.866 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:25:02.866 lat (usec) : 500=0.01% 00:25:02.866 lat (msec) : 4=0.27%, 10=0.51%, 20=99.00%, 50=0.21% 00:25:02.866 cpu : usr=3.50%, sys=12.80%, ctx=395, majf=0, minf=13 00:25:02.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:02.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.866 issued rwts: total=4096,4111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.866 job3: (groupid=0, jobs=1): err= 0: pid=73101: Mon Jul 22 17:03:04 2024 00:25:02.866 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:25:02.866 slat (usec): min=5, max=4532, avg=116.83, stdev=476.56 00:25:02.866 clat (usec): min=11099, max=19897, avg=15395.74, stdev=1155.62 00:25:02.866 lat (usec): min=11119, max=19918, avg=15512.57, stdev=1217.55 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[12649], 5.00th=[13698], 10.00th=[14222], 20.00th=[14484], 00:25:02.866 | 
30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:25:02.866 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16909], 95.00th=[17695], 00:25:02.866 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:25:02.866 | 99.99th=[19792] 00:25:02.866 write: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1002msec); 0 zone resets 00:25:02.866 slat (usec): min=8, max=4560, avg=113.35, stdev=542.57 00:25:02.866 clat (usec): min=396, max=19889, avg=14709.51, stdev=1653.90 00:25:02.866 lat (usec): min=3865, max=19925, avg=14822.86, stdev=1721.17 00:25:02.866 clat percentiles (usec): 00:25:02.866 | 1.00th=[ 9110], 5.00th=[12911], 10.00th=[13566], 20.00th=[13960], 00:25:02.866 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:25:02.866 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16450], 95.00th=[17433], 00:25:02.866 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:25:02.866 | 99.99th=[19792] 00:25:02.866 bw ( KiB/s): min=16416, max=17096, per=23.82%, avg=16756.00, stdev=480.83, samples=2 00:25:02.866 iops : min= 4104, max= 4274, avg=4189.00, stdev=120.21, samples=2 00:25:02.866 lat (usec) : 500=0.01% 00:25:02.866 lat (msec) : 4=0.05%, 10=0.88%, 20=99.06% 00:25:02.866 cpu : usr=3.60%, sys=11.69%, ctx=349, majf=0, minf=13 00:25:02.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:02.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.866 issued rwts: total=4096,4313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.866 00:25:02.866 Run status group 0 (all jobs): 00:25:02.866 READ: bw=67.1MiB/s (70.3MB/s), 16.0MiB/s-17.7MiB/s (16.7MB/s-18.6MB/s), io=67.3MiB (70.5MB), run=1001-1003msec 00:25:02.866 WRITE: bw=68.7MiB/s (72.0MB/s), 16.0MiB/s-17.9MiB/s (16.8MB/s-18.8MB/s), io=68.9MiB (72.3MB), run=1001-1003msec 00:25:02.866 00:25:02.866 Disk stats (read/write): 00:25:02.866 nvme0n1: ios=3729/4096, merge=0/0, ticks=11616/12371, in_queue=23987, util=87.15% 00:25:02.866 nvme0n2: ios=3757/4096, merge=0/0, ticks=11812/12079, in_queue=23891, util=87.55% 00:25:02.866 nvme0n3: ios=3450/3584, merge=0/0, ticks=17228/15195, in_queue=32423, util=88.98% 00:25:02.866 nvme0n4: ios=3556/3584, merge=0/0, ticks=17606/15397, in_queue=33003, util=89.54% 00:25:02.866 17:03:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:25:02.866 [global] 00:25:02.866 thread=1 00:25:02.866 invalidate=1 00:25:02.866 rw=randwrite 00:25:02.866 time_based=1 00:25:02.866 runtime=1 00:25:02.866 ioengine=libaio 00:25:02.866 direct=1 00:25:02.866 bs=4096 00:25:02.866 iodepth=128 00:25:02.866 norandommap=0 00:25:02.866 numjobs=1 00:25:02.866 00:25:02.866 verify_dump=1 00:25:02.866 verify_backlog=512 00:25:02.866 verify_state_save=0 00:25:02.866 do_verify=1 00:25:02.866 verify=crc32c-intel 00:25:02.866 [job0] 00:25:02.866 filename=/dev/nvme0n1 00:25:02.866 [job1] 00:25:02.866 filename=/dev/nvme0n2 00:25:02.866 [job2] 00:25:02.866 filename=/dev/nvme0n3 00:25:02.866 [job3] 00:25:02.866 filename=/dev/nvme0n4 00:25:02.866 Could not set queue depth (nvme0n1) 00:25:02.866 Could not set queue depth (nvme0n2) 00:25:02.867 Could not set queue depth (nvme0n3) 00:25:02.867 Could not set queue depth (nvme0n4) 00:25:02.867 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:02.867 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:02.867 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:02.867 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:02.867 fio-3.35 00:25:02.867 Starting 4 threads 00:25:04.241 00:25:04.241 job0: (groupid=0, jobs=1): err= 0: pid=73158: Mon Jul 22 17:03:05 2024 00:25:04.241 read: IOPS=2474, BW=9897KiB/s (10.1MB/s)(9956KiB/1006msec) 00:25:04.241 slat (usec): min=8, max=12418, avg=208.03, stdev=1065.27 00:25:04.241 clat (usec): min=889, max=43341, avg=25422.74, stdev=4609.84 00:25:04.241 lat (usec): min=7498, max=43363, avg=25630.77, stdev=4651.63 00:25:04.241 clat percentiles (usec): 00:25:04.241 | 1.00th=[ 8029], 5.00th=[18744], 10.00th=[19792], 20.00th=[23987], 00:25:04.241 | 30.00th=[24773], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:25:04.241 | 70.00th=[26346], 80.00th=[26608], 90.00th=[30540], 95.00th=[33424], 00:25:04.241 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:25:04.241 | 99.99th=[43254] 00:25:04.241 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:25:04.241 slat (usec): min=10, max=11500, avg=182.04, stdev=977.91 00:25:04.241 clat (usec): min=10947, max=39450, avg=24973.12, stdev=3977.36 00:25:04.241 lat (usec): min=10977, max=39472, avg=25155.16, stdev=4075.80 00:25:04.241 clat percentiles (usec): 00:25:04.241 | 1.00th=[14615], 5.00th=[16909], 10.00th=[21103], 20.00th=[22676], 00:25:04.241 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24773], 60.00th=[25822], 00:25:04.241 | 70.00th=[26608], 80.00th=[27395], 90.00th=[29230], 95.00th=[32637], 00:25:04.241 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:25:04.241 | 99.99th=[39584] 00:25:04.241 bw ( KiB/s): min= 8704, max=11776, per=18.41%, avg=10240.00, stdev=2172.23, samples=2 00:25:04.241 iops : min= 2176, max= 2944, avg=2560.00, stdev=543.06, samples=2 00:25:04.241 lat (usec) : 1000=0.02% 00:25:04.241 lat (msec) : 10=1.25%, 20=7.66%, 50=91.07% 00:25:04.241 cpu : usr=2.29%, sys=6.47%, ctx=251, majf=0, minf=13 00:25:04.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:04.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:04.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:04.241 issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:04.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:04.241 job1: (groupid=0, jobs=1): err= 0: pid=73159: Mon Jul 22 17:03:05 2024 00:25:04.241 read: IOPS=2429, BW=9719KiB/s (9953kB/s)(9768KiB/1005msec) 00:25:04.241 slat (usec): min=6, max=12287, avg=202.00, stdev=1037.18 00:25:04.241 clat (usec): min=1602, max=42134, avg=25324.14, stdev=4559.38 00:25:04.241 lat (usec): min=10028, max=42164, avg=25526.14, stdev=4563.76 00:25:04.241 clat percentiles (usec): 00:25:04.241 | 1.00th=[10421], 5.00th=[16909], 10.00th=[20317], 20.00th=[23987], 00:25:04.241 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:25:04.241 | 70.00th=[26346], 80.00th=[26870], 90.00th=[29230], 95.00th=[32113], 00:25:04.241 | 99.00th=[39060], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:25:04.241 | 99.99th=[42206] 00:25:04.241 write: IOPS=2547, 
BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:25:04.241 slat (usec): min=11, max=13209, avg=191.08, stdev=1049.54 00:25:04.241 clat (usec): min=11972, max=42416, avg=25224.23, stdev=3622.50 00:25:04.241 lat (usec): min=11990, max=42435, avg=25415.31, stdev=3723.44 00:25:04.241 clat percentiles (usec): 00:25:04.241 | 1.00th=[16712], 5.00th=[20579], 10.00th=[22152], 20.00th=[22938], 00:25:04.241 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24773], 60.00th=[25297], 00:25:04.241 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[31851], 00:25:04.241 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:25:04.241 | 99.99th=[42206] 00:25:04.241 bw ( KiB/s): min= 8896, max=11607, per=18.43%, avg=10251.50, stdev=1916.97, samples=2 00:25:04.241 iops : min= 2224, max= 2901, avg=2562.50, stdev=478.71, samples=2 00:25:04.241 lat (msec) : 2=0.02%, 20=6.70%, 50=93.28% 00:25:04.241 cpu : usr=2.39%, sys=6.27%, ctx=248, majf=0, minf=11 00:25:04.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:04.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:04.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:04.241 issued rwts: total=2442,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:04.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:04.241 job2: (groupid=0, jobs=1): err= 0: pid=73160: Mon Jul 22 17:03:05 2024 00:25:04.241 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:25:04.241 slat (usec): min=7, max=3837, avg=113.07, stdev=560.83 00:25:04.241 clat (usec): min=10709, max=16896, avg=14966.76, stdev=842.02 00:25:04.241 lat (usec): min=12943, max=16923, avg=15079.83, stdev=636.34 00:25:04.241 clat percentiles (usec): 00:25:04.241 | 1.00th=[11600], 5.00th=[13566], 10.00th=[14091], 20.00th=[14615], 00:25:04.241 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:25:04.241 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[16188], 00:25:04.241 | 99.00th=[16581], 99.50th=[16712], 99.90th=[16909], 99.95th=[16909], 00:25:04.241 | 99.99th=[16909] 00:25:04.241 write: IOPS=4474, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1004msec); 0 zone resets 00:25:04.241 slat (usec): min=8, max=6520, avg=112.12, stdev=512.43 00:25:04.241 clat (usec): min=2992, max=21090, avg=14554.60, stdev=1120.64 00:25:04.241 lat (usec): min=3011, max=21111, avg=14666.73, stdev=996.17 00:25:04.241 clat percentiles (usec): 00:25:04.241 | 1.00th=[10683], 5.00th=[13435], 10.00th=[13698], 20.00th=[14091], 00:25:04.241 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[14746], 00:25:04.242 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15270], 95.00th=[15533], 00:25:04.242 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21103], 99.95th=[21103], 00:25:04.242 | 99.99th=[21103] 00:25:04.242 bw ( KiB/s): min=17450, max=17504, per=31.42%, avg=17477.00, stdev=38.18, samples=2 00:25:04.242 iops : min= 4362, max= 4376, avg=4369.00, stdev= 9.90, samples=2 00:25:04.242 lat (msec) : 4=0.14%, 10=0.14%, 20=99.50%, 50=0.22% 00:25:04.242 cpu : usr=2.79%, sys=11.47%, ctx=270, majf=0, minf=11 00:25:04.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:04.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:04.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:04.242 issued rwts: total=4096,4492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:04.242 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:25:04.242 job3: (groupid=0, jobs=1): err= 0: pid=73161: Mon Jul 22 17:03:05 2024 00:25:04.242 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:25:04.242 slat (usec): min=7, max=5310, avg=114.36, stdev=476.37 00:25:04.242 clat (usec): min=11575, max=22921, avg=15170.96, stdev=1423.52 00:25:04.242 lat (usec): min=11592, max=22933, avg=15285.31, stdev=1475.81 00:25:04.242 clat percentiles (usec): 00:25:04.242 | 1.00th=[12256], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:25:04.242 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:25:04.242 | 70.00th=[15270], 80.00th=[16057], 90.00th=[17171], 95.00th=[17957], 00:25:04.242 | 99.00th=[19792], 99.50th=[21627], 99.90th=[22938], 99.95th=[22938], 00:25:04.242 | 99.99th=[22938] 00:25:04.242 write: IOPS=4359, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec); 0 zone resets 00:25:04.242 slat (usec): min=9, max=5533, avg=114.67, stdev=593.80 00:25:04.242 clat (usec): min=681, max=24441, avg=14775.01, stdev=1939.52 00:25:04.242 lat (usec): min=3939, max=24467, avg=14889.68, stdev=2013.73 00:25:04.242 clat percentiles (usec): 00:25:04.242 | 1.00th=[10028], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:25:04.242 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:25:04.242 | 70.00th=[14746], 80.00th=[15139], 90.00th=[17433], 95.00th=[18744], 00:25:04.242 | 99.00th=[20317], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:25:04.242 | 99.99th=[24511] 00:25:04.242 bw ( KiB/s): min=16592, max=17434, per=30.59%, avg=17013.00, stdev=595.38, samples=2 00:25:04.242 iops : min= 4148, max= 4358, avg=4253.00, stdev=148.49, samples=2 00:25:04.242 lat (usec) : 750=0.01% 00:25:04.242 lat (msec) : 4=0.04%, 10=0.47%, 20=98.41%, 50=1.07% 00:25:04.242 cpu : usr=3.89%, sys=10.27%, ctx=303, majf=0, minf=16 00:25:04.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:04.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:04.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:04.242 issued rwts: total=4096,4377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:04.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:04.242 00:25:04.242 Run status group 0 (all jobs): 00:25:04.242 READ: bw=51.0MiB/s (53.4MB/s), 9719KiB/s-15.9MiB/s (9953kB/s-16.7MB/s), io=51.3MiB (53.8MB), run=1004-1006msec 00:25:04.242 WRITE: bw=54.3MiB/s (57.0MB/s), 9.94MiB/s-17.5MiB/s (10.4MB/s-18.3MB/s), io=54.6MiB (57.3MB), run=1004-1006msec 00:25:04.242 00:25:04.242 Disk stats (read/write): 00:25:04.242 nvme0n1: ios=2098/2207, merge=0/0, ticks=26949/23917, in_queue=50866, util=87.66% 00:25:04.242 nvme0n2: ios=2088/2199, merge=0/0, ticks=25846/24944, in_queue=50790, util=86.90% 00:25:04.242 nvme0n3: ios=3584/3712, merge=0/0, ticks=12390/12102, in_queue=24492, util=88.52% 00:25:04.242 nvme0n4: ios=3556/3584, merge=0/0, ticks=17389/15244, in_queue=32633, util=89.55% 00:25:04.242 17:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:25:04.242 17:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=73180 00:25:04.242 17:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:25:04.242 17:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:25:04.242 [global] 00:25:04.242 thread=1 00:25:04.242 invalidate=1 00:25:04.242 rw=read 
00:25:04.242 time_based=1 00:25:04.242 runtime=10 00:25:04.242 ioengine=libaio 00:25:04.242 direct=1 00:25:04.242 bs=4096 00:25:04.242 iodepth=1 00:25:04.242 norandommap=1 00:25:04.242 numjobs=1 00:25:04.242 00:25:04.242 [job0] 00:25:04.242 filename=/dev/nvme0n1 00:25:04.242 [job1] 00:25:04.242 filename=/dev/nvme0n2 00:25:04.242 [job2] 00:25:04.242 filename=/dev/nvme0n3 00:25:04.242 [job3] 00:25:04.242 filename=/dev/nvme0n4 00:25:04.242 Could not set queue depth (nvme0n1) 00:25:04.242 Could not set queue depth (nvme0n2) 00:25:04.242 Could not set queue depth (nvme0n3) 00:25:04.242 Could not set queue depth (nvme0n4) 00:25:04.242 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:04.242 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:04.242 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:04.242 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:04.242 fio-3.35 00:25:04.242 Starting 4 threads 00:25:07.530 17:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:25:07.530 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=38821888, buflen=4096 00:25:07.530 fio: pid=73224, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:07.530 17:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:25:07.530 fio: pid=73223, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:07.530 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=52809728, buflen=4096 00:25:07.530 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:07.531 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:07.789 fio: pid=73221, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:07.789 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55558144, buflen=4096 00:25:08.047 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:08.047 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:08.306 fio: pid=73222, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:08.306 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10944512, buflen=4096 00:25:08.306 00:25:08.306 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73221: Mon Jul 22 17:03:09 2024 00:25:08.306 read: IOPS=3935, BW=15.4MiB/s (16.1MB/s)(53.0MiB/3447msec) 00:25:08.306 slat (usec): min=5, max=9140, avg=12.32, stdev=143.87 00:25:08.306 clat (usec): min=126, max=5413, avg=240.80, stdev=99.60 00:25:08.306 lat (usec): min=136, max=9426, avg=253.11, stdev=175.58 00:25:08.306 clat percentiles (usec): 00:25:08.306 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:25:08.306 | 30.00th=[ 202], 40.00th=[ 227], 50.00th=[ 249], 60.00th=[ 260], 00:25:08.306 | 70.00th=[ 269], 
80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 322], 00:25:08.306 | 99.00th=[ 367], 99.50th=[ 433], 99.90th=[ 717], 99.95th=[ 2040], 00:25:08.306 | 99.99th=[ 4293] 00:25:08.306 bw ( KiB/s): min=12856, max=20360, per=27.70%, avg=15753.33, stdev=3232.12, samples=6 00:25:08.306 iops : min= 3214, max= 5090, avg=3938.33, stdev=808.03, samples=6 00:25:08.306 lat (usec) : 250=51.63%, 500=48.06%, 750=0.20%, 1000=0.03% 00:25:08.306 lat (msec) : 2=0.01%, 4=0.04%, 10=0.01% 00:25:08.306 cpu : usr=0.93%, sys=3.80%, ctx=13572, majf=0, minf=1 00:25:08.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.306 issued rwts: total=13565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.306 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73222: Mon Jul 22 17:03:09 2024 00:25:08.306 read: IOPS=4926, BW=19.2MiB/s (20.2MB/s)(74.4MiB/3868msec) 00:25:08.306 slat (usec): min=5, max=12362, avg=11.98, stdev=135.04 00:25:08.306 clat (usec): min=121, max=199843, avg=190.02, stdev=1448.27 00:25:08.306 lat (usec): min=132, max=199864, avg=201.99, stdev=1454.82 00:25:08.306 clat percentiles (usec): 00:25:08.306 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:25:08.306 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:25:08.306 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 229], 00:25:08.306 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 898], 99.95th=[ 1713], 00:25:08.306 | 99.99th=[ 4293] 00:25:08.306 bw ( KiB/s): min=11576, max=22336, per=35.59%, avg=20241.14, stdev=3853.39, samples=7 00:25:08.306 iops : min= 2894, max= 5584, avg=5060.29, stdev=963.35, samples=7 00:25:08.306 lat (usec) : 250=97.93%, 500=1.90%, 750=0.05%, 1000=0.03% 00:25:08.306 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01%, 250=0.01% 00:25:08.306 cpu : usr=1.19%, sys=4.47%, ctx=19077, majf=0, minf=1 00:25:08.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.306 issued rwts: total=19057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.307 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73223: Mon Jul 22 17:03:09 2024 00:25:08.307 read: IOPS=4008, BW=15.7MiB/s (16.4MB/s)(50.4MiB/3217msec) 00:25:08.307 slat (usec): min=7, max=10915, avg=12.45, stdev=118.39 00:25:08.307 clat (usec): min=142, max=3698, avg=236.06, stdev=72.18 00:25:08.307 lat (usec): min=150, max=11143, avg=248.52, stdev=139.24 00:25:08.307 clat percentiles (usec): 00:25:08.307 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:25:08.307 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:25:08.307 | 70.00th=[ 231], 80.00th=[ 281], 90.00th=[ 338], 95.00th=[ 355], 00:25:08.307 | 99.00th=[ 392], 99.50th=[ 429], 99.90th=[ 578], 99.95th=[ 725], 00:25:08.307 | 99.99th=[ 2933] 00:25:08.307 bw ( KiB/s): min=11408, max=18256, per=27.99%, avg=15918.67, stdev=3077.81, samples=6 00:25:08.307 iops : min= 2852, max= 4564, avg=3979.67, stdev=769.45, samples=6 00:25:08.307 lat (usec) : 250=76.66%, 
500=23.15%, 750=0.14%, 1000=0.02% 00:25:08.307 lat (msec) : 4=0.02% 00:25:08.307 cpu : usr=1.18%, sys=3.92%, ctx=12896, majf=0, minf=1 00:25:08.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.307 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.307 issued rwts: total=12894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.307 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73224: Mon Jul 22 17:03:09 2024 00:25:08.307 read: IOPS=3215, BW=12.6MiB/s (13.2MB/s)(37.0MiB/2948msec) 00:25:08.307 slat (nsec): min=5957, max=70492, avg=12080.22, stdev=4677.47 00:25:08.307 clat (usec): min=190, max=7493, avg=297.48, stdev=115.03 00:25:08.307 lat (usec): min=204, max=7522, avg=309.56, stdev=116.15 00:25:08.307 clat percentiles (usec): 00:25:08.307 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:25:08.307 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:25:08.307 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 363], 00:25:08.307 | 99.00th=[ 429], 99.50th=[ 474], 99.90th=[ 914], 99.95th=[ 3130], 00:25:08.307 | 99.99th=[ 7504] 00:25:08.307 bw ( KiB/s): min=11216, max=14040, per=22.71%, avg=12912.00, stdev=1460.07, samples=5 00:25:08.307 iops : min= 2804, max= 3510, avg=3228.00, stdev=365.02, samples=5 00:25:08.307 lat (usec) : 250=9.34%, 500=90.30%, 750=0.22%, 1000=0.04% 00:25:08.307 lat (msec) : 2=0.03%, 4=0.03%, 10=0.02% 00:25:08.307 cpu : usr=0.81%, sys=3.97%, ctx=9480, majf=0, minf=2 00:25:08.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.307 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.307 issued rwts: total=9479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.307 00:25:08.307 Run status group 0 (all jobs): 00:25:08.307 READ: bw=55.5MiB/s (58.2MB/s), 12.6MiB/s-19.2MiB/s (13.2MB/s-20.2MB/s), io=215MiB (225MB), run=2948-3868msec 00:25:08.307 00:25:08.307 Disk stats (read/write): 00:25:08.307 nvme0n1: ios=13168/0, merge=0/0, ticks=3096/0, in_queue=3096, util=95.16% 00:25:08.307 nvme0n2: ios=19032/0, merge=0/0, ticks=3602/0, in_queue=3602, util=95.88% 00:25:08.307 nvme0n3: ios=12397/0, merge=0/0, ticks=2932/0, in_queue=2932, util=96.17% 00:25:08.307 nvme0n4: ios=9217/0, merge=0/0, ticks=2683/0, in_queue=2683, util=96.45% 00:25:08.565 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:08.565 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:09.132 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:09.132 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:09.390 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:09.390 17:03:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:09.956 17:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:09.956 17:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:10.214 17:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:10.214 17:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:25:10.780 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:25:10.780 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 73180 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:10.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:10.781 nvmf hotplug test: fio failed as expected 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:25:10.781 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
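The run above is the hotplug negative test: fio keeps reading from the exported namespaces while the backing raid and malloc bdevs are deleted over RPC, so every job ends with err=121 (Remote I/O error), wait returns a non-zero fio_status, and the script prints 'nvmf hotplug test: fio failed as expected'. A minimal sketch of that pattern, reusing the fio-wrapper and rpc.py invocations visible in the trace (the surrounding control flow here is an assumption, not the literal target/fio.sh source):

#!/usr/bin/env bash
# Sketch only: condensed from the hotplug steps traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
wrapper=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper

# Long-running read workload against the connected NVMe-oF namespaces.
$wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the backing bdevs out from under the running I/O.
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"
done

# fio is expected to fail once its files start returning Remote I/O errors.
fio_status=0
wait "$fio_pid" || fio_status=$?
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
else
    echo 'unexpected: fio kept running after bdev removal'
fi
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Deleting a bdev removes the corresponding namespace from the subsystem, so the initiator-side block devices start failing reads instead of hanging, which is exactly what the err=121 lines in the fio summaries above show.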
00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.347 rmmod nvme_tcp 00:25:11.347 rmmod nvme_fabrics 00:25:11.347 rmmod nvme_keyring 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 72794 ']' 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 72794 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 72794 ']' 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 72794 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72794 00:25:11.347 killing process with pid 72794 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72794' 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 72794 00:25:11.347 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 72794 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:12.723 00:25:12.723 real 0m22.394s 00:25:12.723 user 1m21.237s 00:25:12.723 sys 0m11.119s 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.723 17:03:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.723 ************************************ 00:25:12.723 END TEST nvmf_fio_target 00:25:12.723 ************************************ 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.723 17:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:12.983 ************************************ 00:25:12.983 START TEST nvmf_bdevio 00:25:12.983 ************************************ 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:12.983 * Looking for test storage... 00:25:12.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.983 
17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.983 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:12.984 17:03:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:12.984 Cannot find device "nvmf_tgt_br" 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:12.984 Cannot find device "nvmf_tgt_br2" 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:12.984 Cannot find device "nvmf_tgt_br" 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:12.984 Cannot find device "nvmf_tgt_br2" 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:25:12.984 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:13.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:25:13.242 00:25:13.242 --- 10.0.0.2 ping statistics --- 00:25:13.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.242 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:13.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:25:13.242 00:25:13.242 --- 10.0.0.3 ping statistics --- 00:25:13.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.242 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:13.242 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:13.242 00:25:13.242 --- 10.0.0.1 ping statistics --- 00:25:13.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.242 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=73515 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 73515 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 73515 ']' 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.500 17:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:13.500 [2024-07-22 17:03:14.987205] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
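The ping statistics above are the tail end of nvmf_veth_init, which builds the virtual test network before the target application comes up: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end on the host (10.0.0.1), and an nvmf_br bridge joining the peer interfaces, verified by the three pings before modprobe nvme-tcp. Condensed from the ip/iptables commands in the trace (the 'Cannot find device' cleanup pass beforehand is omitted):

# Namespace for the SPDK target plus the veth pairs used by the tests.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator address on the host, target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers and open TCP/4420 toward the initiator interface.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirrored by the ping statistics above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1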
00:25:13.500 [2024-07-22 17:03:14.987394] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.758 [2024-07-22 17:03:15.159893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.015 [2024-07-22 17:03:15.419011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.015 [2024-07-22 17:03:15.419074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.015 [2024-07-22 17:03:15.419088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.015 [2024-07-22 17:03:15.419103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.015 [2024-07-22 17:03:15.419118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.015 [2024-07-22 17:03:15.419397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:14.015 [2024-07-22 17:03:15.420155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:14.015 [2024-07-22 17:03:15.420333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.015 [2024-07-22 17:03:15.420338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:14.274 [2024-07-22 17:03:15.677153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:14.274 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.274 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:25:14.274 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.274 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.274 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:14.533 [2024-07-22 17:03:15.911922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.533 17:03:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:14.533 Malloc0 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:14.533 [2024-07-22 17:03:16.045649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:14.533 { 00:25:14.533 "params": { 00:25:14.533 "name": "Nvme$subsystem", 00:25:14.533 "trtype": "$TEST_TRANSPORT", 00:25:14.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.533 "adrfam": "ipv4", 00:25:14.533 "trsvcid": "$NVMF_PORT", 00:25:14.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.533 "hdgst": ${hdgst:-false}, 00:25:14.533 "ddgst": ${ddgst:-false} 00:25:14.533 }, 00:25:14.533 "method": "bdev_nvme_attach_controller" 00:25:14.533 } 00:25:14.533 EOF 00:25:14.533 )") 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
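Behind the rpc_cmd wrappers above, standing up the small target that bdevio exercises comes down to five rpc.py calls; a sketch with the same arguments as the trace (rpc.py reaching the target over its default /var/tmp/spdk.sock RPC socket):

# Sketch: the nvmf target setup driven by the rpc_cmd calls above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001                           # allow any host, set the serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420

The bdevio app then consumes the JSON fragment assembled in the surrounding trace to attach to that listener as nqn.2016-06.io.spdk:host1 and runs its CUnit suite against the resulting Nvme1n1 bdev.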
00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:25:14.533 17:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:14.533 "params": { 00:25:14.533 "name": "Nvme1", 00:25:14.533 "trtype": "tcp", 00:25:14.533 "traddr": "10.0.0.2", 00:25:14.533 "adrfam": "ipv4", 00:25:14.533 "trsvcid": "4420", 00:25:14.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:14.533 "hdgst": false, 00:25:14.533 "ddgst": false 00:25:14.533 }, 00:25:14.533 "method": "bdev_nvme_attach_controller" 00:25:14.533 }' 00:25:14.533 [2024-07-22 17:03:16.141768] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:14.533 [2024-07-22 17:03:16.141889] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73551 ] 00:25:14.791 [2024-07-22 17:03:16.310388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:15.049 [2024-07-22 17:03:16.571948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.049 [2024-07-22 17:03:16.571979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.049 [2024-07-22 17:03:16.571979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.308 [2024-07-22 17:03:16.839325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:15.567 I/O targets: 00:25:15.567 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:15.567 00:25:15.567 00:25:15.567 CUnit - A unit testing framework for C - Version 2.1-3 00:25:15.567 http://cunit.sourceforge.net/ 00:25:15.567 00:25:15.567 00:25:15.567 Suite: bdevio tests on: Nvme1n1 00:25:15.567 Test: blockdev write read block ...passed 00:25:15.567 Test: blockdev write zeroes read block ...passed 00:25:15.567 Test: blockdev write zeroes read no split ...passed 00:25:15.567 Test: blockdev write zeroes read split ...passed 00:25:15.567 Test: blockdev write zeroes read split partial ...passed 00:25:15.567 Test: blockdev reset ...[2024-07-22 17:03:17.162903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.567 [2024-07-22 17:03:17.163058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:25:15.567 [2024-07-22 17:03:17.179158] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:15.567 passed 00:25:15.567 Test: blockdev write read 8 blocks ...passed 00:25:15.567 Test: blockdev write read size > 128k ...passed 00:25:15.567 Test: blockdev write read invalid size ...passed 00:25:15.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:15.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:15.567 Test: blockdev write read max offset ...passed 00:25:15.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:15.567 Test: blockdev writev readv 8 blocks ...passed 00:25:15.824 Test: blockdev writev readv 30 x 1block ...passed 00:25:15.824 Test: blockdev writev readv block ...passed 00:25:15.824 Test: blockdev writev readv size > 128k ...passed 00:25:15.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:15.824 Test: blockdev comparev and writev ...[2024-07-22 17:03:17.189855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.824 [2024-07-22 17:03:17.190088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.824 [2024-07-22 17:03:17.190188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.824 [2024-07-22 17:03:17.190297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.824 [2024-07-22 17:03:17.190730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.824 [2024-07-22 17:03:17.190867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.190952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.825 [2024-07-22 17:03:17.191020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.191468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.825 [2024-07-22 17:03:17.191562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.191654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.825 [2024-07-22 17:03:17.191760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.192216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.825 [2024-07-22 17:03:17.192340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.192426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:15.825 [2024-07-22 17:03:17.192503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.825 passed 00:25:15.825 Test: blockdev nvme passthru rw ...passed 00:25:15.825 Test: blockdev nvme passthru vendor specific ...[2024-07-22 17:03:17.193331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.825 [2024-07-22 17:03:17.193466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.193675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.825 [2024-07-22 17:03:17.193767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.193955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.825 [2024-07-22 17:03:17.194044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.825 [2024-07-22 17:03:17.194229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.825 [2024-07-22 17:03:17.194331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.825 passed 00:25:15.825 Test: blockdev nvme admin passthru ...passed 00:25:15.825 Test: blockdev copy ...passed 00:25:15.825 00:25:15.825 Run Summary: Type Total Ran Passed Failed Inactive 00:25:15.825 suites 1 1 n/a 0 0 00:25:15.825 tests 23 23 23 0 0 00:25:15.825 asserts 152 152 152 0 n/a 00:25:15.825 00:25:15.825 Elapsed time = 0.362 seconds 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.195 rmmod nvme_tcp 00:25:17.195 rmmod nvme_fabrics 00:25:17.195 rmmod nvme_keyring 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
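The module unload above is the first half of nvmftestfini; the second half, which follows, stops the nvmf target by pid through killprocess. A minimal sketch of the combined idiom (the real helpers live in test/nvmf/common.sh and common/autotest_common.sh; this only captures the shape of what they do here, with this run's pid hard-coded for illustration):

# Host side: drop the kernel initiator stack (this is what produced the rmmod lines above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Target side: confirm the pid is alive and is one of our reactors, then terminate and reap it.
pid=73515                                  # $nvmfpid for this run
kill -0 "$pid"                             # fails fast if the target already exited
comm=$(ps --no-headers -o comm= "$pid")    # a reactor thread name in this run
[[ $comm != sudo ]]                        # refuse to signal a bare sudo wrapper
echo "killing process with pid $pid"
kill "$pid" && wait "$pid"                 # SIGTERM, then wait for a clean exit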
00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 73515 ']' 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 73515 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 73515 ']' 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 73515 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73515 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:25:17.195 killing process with pid 73515 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73515' 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 73515 00:25:17.195 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 73515 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:19.150 00:25:19.150 real 0m5.991s 00:25:19.150 user 0m22.637s 00:25:19.150 sys 0m1.086s 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:19.150 ************************************ 00:25:19.150 END TEST nvmf_bdevio 00:25:19.150 ************************************ 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:19.150 00:25:19.150 real 3m2.529s 00:25:19.150 user 8m7.616s 00:25:19.150 sys 0m58.960s 00:25:19.150 ************************************ 00:25:19.150 END TEST nvmf_target_core 00:25:19.150 ************************************ 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 
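Both the suite that just ended and the one that starts next are driven through the same run_test wrapper, which is what prints the starred START TEST / END TEST banners and the real/user/sys totals seen here. The actual helper lives in common/autotest_common.sh; the sketch below reproduces only the observable banner-and-timing pattern, under the assumption that the summaries come from bash's time keyword.

run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # e.g. .../test/nvmf/nvmf_target_extra.sh --transport=tcp
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
}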
00:25:19.150 17:03:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:19.150 17:03:20 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:25:19.150 17:03:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:19.150 17:03:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.150 17:03:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.150 ************************************ 00:25:19.150 START TEST nvmf_target_extra 00:25:19.150 ************************************ 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:25:19.150 * Looking for test storage... 00:25:19.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.150 17:03:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:25:19.151 17:03:20 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:19.151 ************************************ 00:25:19.151 START TEST nvmf_auth_target 00:25:19.151 ************************************ 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:25:19.151 * Looking for test storage... 00:25:19.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.151 17:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.151 17:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:19.151 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:19.152 Cannot find device "nvmf_tgt_br" 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.152 Cannot find device "nvmf_tgt_br2" 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:19.152 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:19.410 Cannot find device "nvmf_tgt_br" 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:19.410 Cannot find device "nvmf_tgt_br2" 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:19.410 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:19.411 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:19.411 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:19.411 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:19.411 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:19.411 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:19.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:25:19.669 00:25:19.669 --- 10.0.0.2 ping statistics --- 00:25:19.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.669 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:19.669 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:19.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:19.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:19.669 00:25:19.669 --- 10.0.0.3 ping statistics --- 00:25:19.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.670 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:19.670 00:25:19.670 --- 10.0.0.1 ping statistics --- 00:25:19.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.670 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=73833 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 73833 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 73833 ']' 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
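With NET_TYPE=virt the suite does not touch real NICs: nvmf_veth_init builds the whole topology out of veth pairs, a network namespace and a bridge, and the three pings above are just the sanity check that 10.0.0.1 (initiator, root namespace) can reach 10.0.0.2 and 10.0.0.3 (target, nvmf_tgt_ns_spdk) across nvmf_br. Collected out of the xtrace into one runnable sketch, with names and addresses as they appear in the trace:

ns=nvmf_tgt_ns_spdk
ip netns add "$ns"

# Three veth pairs: the *_if ends carry addresses, the *_br ends will be enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace; addresses as in the trace.
ip link set nvmf_tgt_if  netns "$ns"
ip link set nvmf_tgt_if2 netns "$ns"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the root-namespace ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$ns" ip link set nvmf_tgt_if  up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open the NVMe/TCP port on the initiator side, allow bridge forwarding, and verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$ns" ping -c 1 10.0.0.1

This is also why NVMF_APP is prefixed with the ip netns exec nvmf_tgt_ns_spdk command above: the nvmf_tgt binary started with -L nvmf_auth runs entirely inside that namespace, while the host-side tools stay outside it.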
00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.670 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=73865 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cef637bb24ecbed34042cdba12490e422d3e94639f6e6dea 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.u0a 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cef637bb24ecbed34042cdba12490e422d3e94639f6e6dea 0 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cef637bb24ecbed34042cdba12490e422d3e94639f6e6dea 0 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cef637bb24ecbed34042cdba12490e422d3e94639f6e6dea 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.043 17:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.u0a 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.u0a 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.u0a 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.043 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47b50593d2450a7af0497e50b2e440f6824a12c93b04cd9baadbc6be387d95c4 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XV1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47b50593d2450a7af0497e50b2e440f6824a12c93b04cd9baadbc6be387d95c4 3 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47b50593d2450a7af0497e50b2e440f6824a12c93b04cd9baadbc6be387d95c4 3 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47b50593d2450a7af0497e50b2e440f6824a12c93b04cd9baadbc6be387d95c4 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XV1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XV1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.XV1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:25:21.044 17:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc26b09c3eea44577b00d91f9f038b8b 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HEb 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc26b09c3eea44577b00d91f9f038b8b 1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc26b09c3eea44577b00d91f9f038b8b 1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc26b09c3eea44577b00d91f9f038b8b 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HEb 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HEb 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.HEb 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5df5fd372955bc0f966e5413299cd9f45932fd30f37f3c94 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cKj 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5df5fd372955bc0f966e5413299cd9f45932fd30f37f3c94 2 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5df5fd372955bc0f966e5413299cd9f45932fd30f37f3c94 2 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5df5fd372955bc0f966e5413299cd9f45932fd30f37f3c94 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cKj 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cKj 00:25:21.044 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.cKj 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9a347048af4e2615b632330a2f3dce6c12ac45f003bee437 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZDy 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9a347048af4e2615b632330a2f3dce6c12ac45f003bee437 2 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9a347048af4e2615b632330a2f3dce6c12ac45f003bee437 2 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9a347048af4e2615b632330a2f3dce6c12ac45f003bee437 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZDy 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZDy 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZDy 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.303 17:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=99997c93780d748d18fb274963d932b3 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.siE 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 99997c93780d748d18fb274963d932b3 1 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 99997c93780d748d18fb274963d932b3 1 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=99997c93780d748d18fb274963d932b3 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.siE 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.siE 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.siE 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=52ecae67b4819e1faca556adb7d7afe1fa1d58fd1a7e80421afcbe87de540e75 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lTH 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
52ecae67b4819e1faca556adb7d7afe1fa1d58fd1a7e80421afcbe87de540e75 3 00:25:21.303 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 52ecae67b4819e1faca556adb7d7afe1fa1d58fd1a7e80421afcbe87de540e75 3 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=52ecae67b4819e1faca556adb7d7afe1fa1d58fd1a7e80421afcbe87de540e75 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lTH 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lTH 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.lTH 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 73833 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 73833 ']' 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.304 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 73865 /var/tmp/host.sock 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 73865 ']' 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
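The key material traced above is produced by gen_dhchap_key: xxd pulls len/2 random bytes, the resulting hex string itself becomes the secret, and the inline python step wraps it into the DHHC-1:<digest-id>:<base64>: form that shows up in the nvme connect lines further down. A minimal standalone sketch of that wrapping follows, assuming the CRC-32 of the ASCII key is appended little-endian before base64 encoding; the function name gen_dhchap_key_sketch is made up for illustration and is not the nvmf/common.sh helper itself.

# Sketch: emit a DHHC-1 secret the way the trace suggests (digest id 0=null,
# 1=sha256, 2=sha384, 3=sha512, matching the :00:/:01:/:02:/:03: tags in this log).
gen_dhchap_key_sketch() {    # hypothetical name, for illustration only
    local digest_id=$1 hexlen=$2 key
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # hex string, e.g. 48 chars for sha384
    python3 - "$key" "$digest_id" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# The base64 payload seen in the nvme connect lines decodes to the ASCII hex key
# plus four trailing bytes, i.e. a CRC-32 appended before encoding
# (byte order assumed little-endian here).
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}
gen_dhchap_key_sketch 2 48    # e.g. the sha384 / 48-hex-char case generated above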
00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.562 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.496 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.496 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:22.496 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:25:22.496 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.496 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.u0a 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.u0a 00:25:22.755 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.u0a 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.XV1 ]] 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XV1 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XV1 00:25:23.014 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XV1 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HEb 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HEb 00:25:23.273 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HEb 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.cKj ]] 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cKj 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cKj 00:25:23.529 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cKj 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZDy 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZDy 00:25:23.787 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZDy 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.siE ]] 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.siE 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.siE 00:25:24.046 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.siE 00:25:24.304 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:24.304 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lTH 00:25:24.304 17:03:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.304 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.304 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.304 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lTH 00:25:24.304 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lTH 00:25:24.562 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:25:24.562 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:25:24.562 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.562 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:24.562 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:24.562 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.820 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:25:25.394 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:25.394 { 00:25:25.394 "cntlid": 1, 00:25:25.394 "qid": 0, 00:25:25.394 "state": "enabled", 00:25:25.394 "thread": "nvmf_tgt_poll_group_000", 00:25:25.394 "listen_address": { 00:25:25.394 "trtype": "TCP", 00:25:25.394 "adrfam": "IPv4", 00:25:25.394 "traddr": "10.0.0.2", 00:25:25.394 "trsvcid": "4420" 00:25:25.394 }, 00:25:25.394 "peer_address": { 00:25:25.394 "trtype": "TCP", 00:25:25.394 "adrfam": "IPv4", 00:25:25.394 "traddr": "10.0.0.1", 00:25:25.394 "trsvcid": "34258" 00:25:25.394 }, 00:25:25.394 "auth": { 00:25:25.394 "state": "completed", 00:25:25.394 "digest": "sha256", 00:25:25.394 "dhgroup": "null" 00:25:25.394 } 00:25:25.394 } 00:25:25.394 ]' 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:25.394 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:25.653 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:25.653 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:25.653 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:25.653 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:25.653 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:25.920 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:30.102 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:30.102 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.360 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.361 17:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.619 00:25:30.619 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:30.619 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:30.619 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
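Stripped of the xtrace noise, one round of the loop above boils down to four RPCs: register the host on the subsystem with its DH-HMAC-CHAP key pair, attach a bdev_nvme controller from the host socket with the same keys, then confirm the controller exists and the qpair finished authentication. A condensed sketch with values copied from this log; the surrounding auth.sh plumbing (hostrpc, connect_authenticate) is omitted, so treat this as a restatement rather than the script itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45

# target side: allow the host on the subsystem with its DH-HMAC-CHAP key pair
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side (initiator app listening on /var/tmp/host.sock): attach with the same keys
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# verify: the controller exists and the qpair finished DH-HMAC-CHAP
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect "completed"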
00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:30.877 { 00:25:30.877 "cntlid": 3, 00:25:30.877 "qid": 0, 00:25:30.877 "state": "enabled", 00:25:30.877 "thread": "nvmf_tgt_poll_group_000", 00:25:30.877 "listen_address": { 00:25:30.877 "trtype": "TCP", 00:25:30.877 "adrfam": "IPv4", 00:25:30.877 "traddr": "10.0.0.2", 00:25:30.877 "trsvcid": "4420" 00:25:30.877 }, 00:25:30.877 "peer_address": { 00:25:30.877 "trtype": "TCP", 00:25:30.877 "adrfam": "IPv4", 00:25:30.877 "traddr": "10.0.0.1", 00:25:30.877 "trsvcid": "36656" 00:25:30.877 }, 00:25:30.877 "auth": { 00:25:30.877 "state": "completed", 00:25:30.877 "digest": "sha256", 00:25:30.877 "dhgroup": "null" 00:25:30.877 } 00:25:30.877 } 00:25:30.877 ]' 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:30.877 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:31.136 17:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:32.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
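The same key pair is then exercised through the kernel initiator: after the bdev_nvme controller is detached, nvme connect passes the formatted DHHC-1 secrets directly, and a clean disconnect confirms the controller authenticated and came up. Condensed from the trace above, with the long secrets shortened to placeholders; the real values are the DHHC-1 strings in the log.

# Host secret goes in --dhchap-secret, controller secret in --dhchap-ctrl-secret
# (both shortened to '...' here; the full DHHC-1 strings are in the log above).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
    --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)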
00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.068 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.069 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.327 00:25:32.585 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:32.585 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:32.585 17:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:32.843 { 00:25:32.843 "cntlid": 5, 00:25:32.843 "qid": 0, 00:25:32.843 "state": "enabled", 00:25:32.843 "thread": "nvmf_tgt_poll_group_000", 00:25:32.843 "listen_address": { 00:25:32.843 "trtype": "TCP", 00:25:32.843 "adrfam": "IPv4", 00:25:32.843 "traddr": "10.0.0.2", 00:25:32.843 "trsvcid": "4420" 00:25:32.843 }, 00:25:32.843 "peer_address": { 00:25:32.843 "trtype": "TCP", 00:25:32.843 "adrfam": "IPv4", 00:25:32.843 "traddr": "10.0.0.1", 00:25:32.843 "trsvcid": "36684" 00:25:32.843 }, 00:25:32.843 "auth": { 00:25:32.843 "state": "completed", 00:25:32.843 "digest": "sha256", 00:25:32.843 "dhgroup": "null" 00:25:32.843 } 00:25:32.843 } 00:25:32.843 ]' 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:32.843 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:33.100 17:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:33.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:33.666 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:33.666 17:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:33.937 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:34.195 00:25:34.195 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:34.195 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:34.195 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:34.455 { 00:25:34.455 "cntlid": 7, 00:25:34.455 "qid": 0, 00:25:34.455 "state": "enabled", 00:25:34.455 "thread": "nvmf_tgt_poll_group_000", 00:25:34.455 "listen_address": { 00:25:34.455 "trtype": "TCP", 00:25:34.455 "adrfam": "IPv4", 00:25:34.455 "traddr": 
"10.0.0.2", 00:25:34.455 "trsvcid": "4420" 00:25:34.455 }, 00:25:34.455 "peer_address": { 00:25:34.455 "trtype": "TCP", 00:25:34.455 "adrfam": "IPv4", 00:25:34.455 "traddr": "10.0.0.1", 00:25:34.455 "trsvcid": "36714" 00:25:34.455 }, 00:25:34.455 "auth": { 00:25:34.455 "state": "completed", 00:25:34.455 "digest": "sha256", 00:25:34.455 "dhgroup": "null" 00:25:34.455 } 00:25:34.455 } 00:25:34.455 ]' 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:34.455 17:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:34.711 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:35.277 17:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:35.536 17:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.536 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.795 00:25:35.795 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:35.795 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:35.795 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:36.057 { 00:25:36.057 "cntlid": 9, 00:25:36.057 "qid": 0, 00:25:36.057 "state": "enabled", 00:25:36.057 "thread": "nvmf_tgt_poll_group_000", 00:25:36.057 "listen_address": { 00:25:36.057 "trtype": "TCP", 00:25:36.057 "adrfam": "IPv4", 00:25:36.057 "traddr": "10.0.0.2", 00:25:36.057 "trsvcid": "4420" 00:25:36.057 }, 00:25:36.057 "peer_address": { 00:25:36.057 "trtype": "TCP", 00:25:36.057 "adrfam": "IPv4", 00:25:36.057 "traddr": "10.0.0.1", 00:25:36.057 "trsvcid": "36740" 00:25:36.057 }, 00:25:36.057 "auth": { 00:25:36.057 "state": "completed", 00:25:36.057 "digest": "sha256", 00:25:36.057 "dhgroup": "ffdhe2048" 00:25:36.057 } 00:25:36.057 } 
00:25:36.057 ]' 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:36.057 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:36.316 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:36.316 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:36.316 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:36.594 17:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:37.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.190 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.449 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.449 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.449 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.449 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.707 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:37.965 { 00:25:37.965 "cntlid": 11, 00:25:37.965 "qid": 0, 00:25:37.965 "state": "enabled", 00:25:37.965 "thread": "nvmf_tgt_poll_group_000", 00:25:37.965 "listen_address": { 00:25:37.965 "trtype": "TCP", 00:25:37.965 "adrfam": "IPv4", 00:25:37.965 "traddr": "10.0.0.2", 00:25:37.965 "trsvcid": "4420" 00:25:37.965 }, 00:25:37.965 "peer_address": { 00:25:37.965 "trtype": "TCP", 00:25:37.965 "adrfam": "IPv4", 00:25:37.965 "traddr": "10.0.0.1", 00:25:37.965 "trsvcid": "58644" 00:25:37.965 }, 00:25:37.965 "auth": { 00:25:37.965 "state": "completed", 00:25:37.965 "digest": "sha256", 00:25:37.965 "dhgroup": "ffdhe2048" 00:25:37.965 } 00:25:37.965 } 00:25:37.965 ]' 00:25:37.965 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:38.222 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:38.222 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:38.222 17:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:38.222 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:38.222 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:38.222 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:38.222 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:38.481 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:39.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.046 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
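Each dhgroup is swept the same way: before every connect round the host-side bdev_nvme options are set to the digest/dhgroup under test, then the per-key add_host/attach/verify round repeats. The trace here covers null and ffdhe2048, and ffdhe3072 follows further down; the outer loop below is a sketch with the full group list abbreviated and the per-key body elided.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Outer sweep; the exact group list used by auth.sh is abbreviated here.
for dhgroup in null ffdhe2048 ffdhe3072; do
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # ... per-key add_host / attach_controller / qpair check, as sketched earlier ...
done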
00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.305 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.564 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:39.823 { 00:25:39.823 "cntlid": 13, 00:25:39.823 "qid": 0, 00:25:39.823 "state": "enabled", 00:25:39.823 "thread": "nvmf_tgt_poll_group_000", 00:25:39.823 "listen_address": { 00:25:39.823 "trtype": "TCP", 00:25:39.823 "adrfam": "IPv4", 00:25:39.823 "traddr": "10.0.0.2", 00:25:39.823 "trsvcid": "4420" 00:25:39.823 }, 00:25:39.823 "peer_address": { 00:25:39.823 "trtype": "TCP", 00:25:39.823 "adrfam": "IPv4", 00:25:39.823 "traddr": "10.0.0.1", 00:25:39.823 "trsvcid": "58674" 00:25:39.823 }, 00:25:39.823 "auth": { 00:25:39.823 "state": "completed", 00:25:39.823 "digest": "sha256", 00:25:39.823 "dhgroup": "ffdhe2048" 00:25:39.823 } 00:25:39.823 } 00:25:39.823 ]' 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:39.823 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:40.080 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:40.080 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:40.080 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:40.080 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:40.080 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:40.337 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:40.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:40.903 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:41.161 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:41.423 00:25:41.423 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:41.423 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:41.423 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:41.681 { 00:25:41.681 "cntlid": 15, 00:25:41.681 "qid": 0, 00:25:41.681 "state": "enabled", 00:25:41.681 "thread": "nvmf_tgt_poll_group_000", 00:25:41.681 "listen_address": { 00:25:41.681 "trtype": "TCP", 00:25:41.681 "adrfam": "IPv4", 00:25:41.681 "traddr": "10.0.0.2", 00:25:41.681 "trsvcid": "4420" 00:25:41.681 }, 00:25:41.681 "peer_address": { 00:25:41.681 "trtype": "TCP", 00:25:41.681 "adrfam": "IPv4", 00:25:41.681 "traddr": "10.0.0.1", 00:25:41.681 "trsvcid": "58710" 00:25:41.681 }, 00:25:41.681 "auth": { 00:25:41.681 "state": "completed", 00:25:41.681 "digest": "sha256", 00:25:41.681 "dhgroup": "ffdhe2048" 00:25:41.681 } 00:25:41.681 } 00:25:41.681 ]' 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:41.681 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:41.939 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:42.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:42.510 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.767 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.768 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.768 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.768 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.768 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.026 00:25:43.283 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:43.283 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:43.283 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:43.541 { 00:25:43.541 "cntlid": 17, 00:25:43.541 "qid": 0, 00:25:43.541 "state": "enabled", 00:25:43.541 "thread": "nvmf_tgt_poll_group_000", 00:25:43.541 "listen_address": { 00:25:43.541 "trtype": "TCP", 00:25:43.541 "adrfam": "IPv4", 00:25:43.541 "traddr": "10.0.0.2", 00:25:43.541 "trsvcid": "4420" 00:25:43.541 }, 00:25:43.541 "peer_address": { 00:25:43.541 "trtype": "TCP", 00:25:43.541 "adrfam": "IPv4", 00:25:43.541 "traddr": "10.0.0.1", 00:25:43.541 "trsvcid": "58728" 00:25:43.541 }, 00:25:43.541 "auth": { 00:25:43.541 "state": "completed", 00:25:43.541 "digest": "sha256", 00:25:43.541 "dhgroup": "ffdhe3072" 00:25:43.541 } 00:25:43.541 } 00:25:43.541 ]' 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:43.541 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:43.541 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:43.541 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:43.541 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:43.541 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:43.541 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:43.799 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:25:44.368 17:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:44.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:44.368 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.626 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.195 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
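Each round logged above (and in the rounds that follow for the other dhgroups and key indexes) exercises one digest/dhgroup/key combination end to end: the host bdev_nvme options are restricted to a single DH-CHAP digest and DH group, the host NQN is added to the subsystem with a key (and, where present, a controller key), an attach/detach is driven over the host RPC socket, the resulting qpair auth state is inspected with jq, and the same DHHC-1 secrets are re-verified with nvme-cli connect/disconnect. A minimal shell sketch of one such round follows; it only restates commands visible in this log, and it assumes the key names (key1/ckey1) were registered earlier in target/auth.sh and that both the target and host RPC servers are already running.

# Sketch of one DH-CHAP round as exercised by target/auth.sh (illustrative only;
# assumes keys key1/ckey1 are already loaded on both sides).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
hostsock="/var/tmp/host.sock"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45"
subnqn="nqn.2024-03.io.spdk:cnode0"

# Restrict the host to a single digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: allow the host NQN on the subsystem with bidirectional keys.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach with the same keys, verify controller and qpair auth state, detach.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'    # expect: ffdhe3072
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The later "nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..." / "nvme disconnect" pairs in the log repeat the same check from the kernel initiator with the raw secrets, before nvmf_subsystem_remove_host tears the host entry back down for the next combination.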
00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:45.195 { 00:25:45.195 "cntlid": 19, 00:25:45.195 "qid": 0, 00:25:45.195 "state": "enabled", 00:25:45.195 "thread": "nvmf_tgt_poll_group_000", 00:25:45.195 "listen_address": { 00:25:45.195 "trtype": "TCP", 00:25:45.195 "adrfam": "IPv4", 00:25:45.195 "traddr": "10.0.0.2", 00:25:45.195 "trsvcid": "4420" 00:25:45.195 }, 00:25:45.195 "peer_address": { 00:25:45.195 "trtype": "TCP", 00:25:45.195 "adrfam": "IPv4", 00:25:45.195 "traddr": "10.0.0.1", 00:25:45.195 "trsvcid": "58772" 00:25:45.195 }, 00:25:45.195 "auth": { 00:25:45.195 "state": "completed", 00:25:45.195 "digest": "sha256", 00:25:45.195 "dhgroup": "ffdhe3072" 00:25:45.195 } 00:25:45.195 } 00:25:45.195 ]' 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:45.195 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:45.454 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:45.454 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:45.454 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:45.454 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:45.454 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:45.712 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:46.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:46.279 17:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.279 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.539 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.539 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.539 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.798 00:25:46.798 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:46.798 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:46.798 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:47.057 { 00:25:47.057 "cntlid": 21, 00:25:47.057 "qid": 0, 00:25:47.057 "state": "enabled", 00:25:47.057 "thread": "nvmf_tgt_poll_group_000", 00:25:47.057 "listen_address": { 00:25:47.057 "trtype": "TCP", 00:25:47.057 "adrfam": "IPv4", 00:25:47.057 "traddr": "10.0.0.2", 00:25:47.057 "trsvcid": "4420" 00:25:47.057 }, 00:25:47.057 "peer_address": { 00:25:47.057 "trtype": "TCP", 00:25:47.057 "adrfam": "IPv4", 00:25:47.057 "traddr": "10.0.0.1", 00:25:47.057 "trsvcid": "58806" 00:25:47.057 }, 00:25:47.057 "auth": { 00:25:47.057 "state": "completed", 00:25:47.057 "digest": "sha256", 00:25:47.057 "dhgroup": "ffdhe3072" 00:25:47.057 } 00:25:47.057 } 00:25:47.057 ]' 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:47.057 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:47.317 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:47.317 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:47.317 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:47.575 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:48.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:48.141 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:48.400 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:48.658 00:25:48.658 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:48.658 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:48.658 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:48.917 { 00:25:48.917 "cntlid": 
23, 00:25:48.917 "qid": 0, 00:25:48.917 "state": "enabled", 00:25:48.917 "thread": "nvmf_tgt_poll_group_000", 00:25:48.917 "listen_address": { 00:25:48.917 "trtype": "TCP", 00:25:48.917 "adrfam": "IPv4", 00:25:48.917 "traddr": "10.0.0.2", 00:25:48.917 "trsvcid": "4420" 00:25:48.917 }, 00:25:48.917 "peer_address": { 00:25:48.917 "trtype": "TCP", 00:25:48.917 "adrfam": "IPv4", 00:25:48.917 "traddr": "10.0.0.1", 00:25:48.917 "trsvcid": "43260" 00:25:48.917 }, 00:25:48.917 "auth": { 00:25:48.917 "state": "completed", 00:25:48.917 "digest": "sha256", 00:25:48.917 "dhgroup": "ffdhe3072" 00:25:48.917 } 00:25:48.917 } 00:25:48.917 ]' 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:48.917 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:49.176 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:49.176 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:49.176 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:49.176 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:49.176 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:49.434 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:50.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:50.000 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:50.259 17:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.259 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.530 00:25:50.530 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:50.530 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:50.530 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:50.791 { 00:25:50.791 "cntlid": 25, 00:25:50.791 "qid": 0, 00:25:50.791 "state": "enabled", 00:25:50.791 "thread": "nvmf_tgt_poll_group_000", 00:25:50.791 "listen_address": { 00:25:50.791 "trtype": "TCP", 00:25:50.791 "adrfam": "IPv4", 00:25:50.791 "traddr": "10.0.0.2", 00:25:50.791 "trsvcid": "4420" 00:25:50.791 }, 00:25:50.791 "peer_address": { 00:25:50.791 "trtype": "TCP", 00:25:50.791 
"adrfam": "IPv4", 00:25:50.791 "traddr": "10.0.0.1", 00:25:50.791 "trsvcid": "43290" 00:25:50.791 }, 00:25:50.791 "auth": { 00:25:50.791 "state": "completed", 00:25:50.791 "digest": "sha256", 00:25:50.791 "dhgroup": "ffdhe4096" 00:25:50.791 } 00:25:50.791 } 00:25:50.791 ]' 00:25:50.791 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:51.049 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:51.306 17:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:52.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:52.262 17:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.262 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.829 00:25:52.829 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:52.829 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:52.829 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:53.087 { 00:25:53.087 "cntlid": 27, 00:25:53.087 "qid": 0, 00:25:53.087 "state": "enabled", 00:25:53.087 "thread": "nvmf_tgt_poll_group_000", 00:25:53.087 "listen_address": { 00:25:53.087 "trtype": "TCP", 00:25:53.087 "adrfam": "IPv4", 00:25:53.087 "traddr": "10.0.0.2", 00:25:53.087 "trsvcid": "4420" 00:25:53.087 }, 00:25:53.087 "peer_address": { 00:25:53.087 "trtype": "TCP", 00:25:53.087 "adrfam": "IPv4", 00:25:53.087 "traddr": "10.0.0.1", 00:25:53.087 "trsvcid": "43326" 00:25:53.087 }, 00:25:53.087 "auth": { 00:25:53.087 "state": "completed", 00:25:53.087 "digest": "sha256", 00:25:53.087 "dhgroup": "ffdhe4096" 00:25:53.087 } 00:25:53.087 } 00:25:53.087 ]' 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:53.087 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:53.345 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:53.345 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:53.345 17:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:53.623 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:54.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.186 17:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.751 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.009 00:25:55.009 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:55.009 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:55.009 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:55.268 { 00:25:55.268 "cntlid": 29, 00:25:55.268 "qid": 0, 00:25:55.268 "state": "enabled", 00:25:55.268 "thread": "nvmf_tgt_poll_group_000", 00:25:55.268 "listen_address": { 00:25:55.268 "trtype": "TCP", 00:25:55.268 "adrfam": "IPv4", 00:25:55.268 "traddr": "10.0.0.2", 00:25:55.268 "trsvcid": "4420" 00:25:55.268 }, 00:25:55.268 "peer_address": { 00:25:55.268 "trtype": "TCP", 00:25:55.268 "adrfam": "IPv4", 00:25:55.268 "traddr": "10.0.0.1", 00:25:55.268 "trsvcid": "43356" 00:25:55.268 }, 00:25:55.268 "auth": { 00:25:55.268 "state": "completed", 00:25:55.268 "digest": "sha256", 00:25:55.268 "dhgroup": "ffdhe4096" 00:25:55.268 } 00:25:55.268 } 00:25:55.268 ]' 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:55.268 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:55.526 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:55.526 17:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:55.526 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:55.526 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:55.526 17:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:55.784 17:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:25:56.719 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:56.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:56.719 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:56.719 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.720 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.720 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.720 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:56.720 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:56.720 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.978 17:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:56.978 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:57.237 00:25:57.237 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:57.237 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:57.237 17:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:57.495 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.495 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:57.495 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:57.760 { 00:25:57.760 "cntlid": 31, 00:25:57.760 "qid": 0, 00:25:57.760 "state": "enabled", 00:25:57.760 "thread": "nvmf_tgt_poll_group_000", 00:25:57.760 "listen_address": { 00:25:57.760 "trtype": "TCP", 00:25:57.760 "adrfam": "IPv4", 00:25:57.760 "traddr": "10.0.0.2", 00:25:57.760 "trsvcid": "4420" 00:25:57.760 }, 00:25:57.760 "peer_address": { 00:25:57.760 "trtype": "TCP", 00:25:57.760 "adrfam": "IPv4", 00:25:57.760 "traddr": "10.0.0.1", 00:25:57.760 "trsvcid": "43394" 00:25:57.760 }, 00:25:57.760 "auth": { 00:25:57.760 "state": "completed", 00:25:57.760 "digest": "sha256", 00:25:57.760 "dhgroup": "ffdhe4096" 00:25:57.760 } 00:25:57.760 } 00:25:57.760 ]' 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:57.760 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:58.025 17:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:58.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:58.958 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.219 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:25:59.220 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.783 00:25:59.783 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:59.783 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:59.783 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:00.041 { 00:26:00.041 "cntlid": 33, 00:26:00.041 "qid": 0, 00:26:00.041 "state": "enabled", 00:26:00.041 "thread": "nvmf_tgt_poll_group_000", 00:26:00.041 "listen_address": { 00:26:00.041 "trtype": "TCP", 00:26:00.041 "adrfam": "IPv4", 00:26:00.041 "traddr": "10.0.0.2", 00:26:00.041 "trsvcid": "4420" 00:26:00.041 }, 00:26:00.041 "peer_address": { 00:26:00.041 "trtype": "TCP", 00:26:00.041 "adrfam": "IPv4", 00:26:00.041 "traddr": "10.0.0.1", 00:26:00.041 "trsvcid": "55460" 00:26:00.041 }, 00:26:00.041 "auth": { 00:26:00.041 "state": "completed", 00:26:00.041 "digest": "sha256", 00:26:00.041 "dhgroup": "ffdhe6144" 00:26:00.041 } 00:26:00.041 } 00:26:00.041 ]' 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:00.041 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:00.311 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid 
c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:00.927 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:01.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:01.191 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.450 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.017 00:26:02.017 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:02.017 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:02.017 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:02.276 { 00:26:02.276 "cntlid": 35, 00:26:02.276 "qid": 0, 00:26:02.276 "state": "enabled", 00:26:02.276 "thread": "nvmf_tgt_poll_group_000", 00:26:02.276 "listen_address": { 00:26:02.276 "trtype": "TCP", 00:26:02.276 "adrfam": "IPv4", 00:26:02.276 "traddr": "10.0.0.2", 00:26:02.276 "trsvcid": "4420" 00:26:02.276 }, 00:26:02.276 "peer_address": { 00:26:02.276 "trtype": "TCP", 00:26:02.276 "adrfam": "IPv4", 00:26:02.276 "traddr": "10.0.0.1", 00:26:02.276 "trsvcid": "55482" 00:26:02.276 }, 00:26:02.276 "auth": { 00:26:02.276 "state": "completed", 00:26:02.276 "digest": "sha256", 00:26:02.276 "dhgroup": "ffdhe6144" 00:26:02.276 } 00:26:02.276 } 00:26:02.276 ]' 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:02.276 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:02.535 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:03.511 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.511 17:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.769 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.335 00:26:04.335 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:04.335 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:04.335 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:04.593 { 00:26:04.593 "cntlid": 37, 00:26:04.593 "qid": 0, 00:26:04.593 "state": "enabled", 00:26:04.593 "thread": "nvmf_tgt_poll_group_000", 00:26:04.593 "listen_address": { 00:26:04.593 "trtype": "TCP", 00:26:04.593 "adrfam": "IPv4", 00:26:04.593 "traddr": "10.0.0.2", 00:26:04.593 "trsvcid": "4420" 00:26:04.593 }, 00:26:04.593 "peer_address": { 00:26:04.593 "trtype": "TCP", 00:26:04.593 "adrfam": "IPv4", 00:26:04.593 "traddr": "10.0.0.1", 00:26:04.593 "trsvcid": "55504" 00:26:04.593 }, 00:26:04.593 "auth": { 00:26:04.593 "state": "completed", 00:26:04.593 "digest": "sha256", 00:26:04.593 "dhgroup": "ffdhe6144" 00:26:04.593 } 00:26:04.593 } 00:26:04.593 ]' 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:04.593 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:04.852 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:05.418 17:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:05.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:05.418 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:05.418 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
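The cycle traced above repeats once per key index for each digest/dhgroup combination: the target registers the host with a given DH-HMAC-CHAP key pair, the SPDK host stack attaches with the matching key, the test checks that the resulting qpair reports auth state "completed" with the expected digest and dhgroup, and the same credentials are then re-checked through the kernel initiator with nvme-cli before the host is removed again. The lines below are a condensed sketch of one such iteration, not the script itself: it reuses the NQNs, address and key names from this run, "scripts/rpc.py" stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path seen in the trace, rpc_cmd is assumed to map to rpc.py against the target's default RPC socket, and the DHHC-1 secret strings are elided as <...> placeholders.

# Target side: register the host NQN with DH-HMAC-CHAP key2 (ckey2 is the controller key).
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side (SPDK bdev_nvme app on /var/tmp/host.sock): pin the negotiable digest and
# dhgroup, then attach with the same key pair.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Check on the target that authentication completed with the expected parameters.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect "ffdhe6144"

# Repeat the check through the kernel initiator, then tear down and deregister the host.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
    --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
    --dhchap-secret '<DHHC-1 secret for key2>' --dhchap-ctrl-secret '<DHHC-1 secret for ckey2>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45

The two halves of each cycle in the trace correspond to those two attach paths: the SPDK host stack via bdev_nvme_attach_controller, and the kernel initiator via nvme connect with the literal DHHC-1 secrets.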
00:26:05.418 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.418 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:05.418 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.419 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:26:05.676 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.677 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.935 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.935 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:05.935 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:06.194 00:26:06.194 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:06.194 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:06.194 17:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.761 17:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:06.761 { 00:26:06.761 "cntlid": 39, 00:26:06.761 "qid": 0, 00:26:06.761 "state": "enabled", 00:26:06.761 "thread": "nvmf_tgt_poll_group_000", 00:26:06.761 "listen_address": { 00:26:06.761 "trtype": "TCP", 00:26:06.761 "adrfam": "IPv4", 00:26:06.761 "traddr": "10.0.0.2", 00:26:06.761 "trsvcid": "4420" 00:26:06.761 }, 00:26:06.761 "peer_address": { 00:26:06.761 "trtype": "TCP", 00:26:06.761 "adrfam": "IPv4", 00:26:06.761 "traddr": "10.0.0.1", 00:26:06.761 "trsvcid": "55532" 00:26:06.761 }, 00:26:06.761 "auth": { 00:26:06.761 "state": "completed", 00:26:06.761 "digest": "sha256", 00:26:06.761 "dhgroup": "ffdhe6144" 00:26:06.761 } 00:26:06.761 } 00:26:06.761 ]' 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:06.761 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:07.021 17:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:07.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.627 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.886 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.454 00:26:08.454 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:08.454 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:08.454 17:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:08.711 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:08.712 { 00:26:08.712 "cntlid": 41, 00:26:08.712 "qid": 0, 
00:26:08.712 "state": "enabled", 00:26:08.712 "thread": "nvmf_tgt_poll_group_000", 00:26:08.712 "listen_address": { 00:26:08.712 "trtype": "TCP", 00:26:08.712 "adrfam": "IPv4", 00:26:08.712 "traddr": "10.0.0.2", 00:26:08.712 "trsvcid": "4420" 00:26:08.712 }, 00:26:08.712 "peer_address": { 00:26:08.712 "trtype": "TCP", 00:26:08.712 "adrfam": "IPv4", 00:26:08.712 "traddr": "10.0.0.1", 00:26:08.712 "trsvcid": "32776" 00:26:08.712 }, 00:26:08.712 "auth": { 00:26:08.712 "state": "completed", 00:26:08.712 "digest": "sha256", 00:26:08.712 "dhgroup": "ffdhe8192" 00:26:08.712 } 00:26:08.712 } 00:26:08.712 ]' 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:08.712 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:08.970 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:08.970 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:08.970 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:08.970 17:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:09.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.903 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.836 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:10.836 { 00:26:10.836 "cntlid": 43, 00:26:10.836 "qid": 0, 00:26:10.836 "state": "enabled", 00:26:10.836 "thread": "nvmf_tgt_poll_group_000", 00:26:10.836 "listen_address": { 00:26:10.836 "trtype": "TCP", 00:26:10.836 "adrfam": "IPv4", 00:26:10.836 "traddr": "10.0.0.2", 00:26:10.836 "trsvcid": "4420" 00:26:10.836 }, 00:26:10.836 "peer_address": { 00:26:10.836 "trtype": "TCP", 00:26:10.836 "adrfam": "IPv4", 00:26:10.836 "traddr": "10.0.0.1", 
00:26:10.836 "trsvcid": "32810" 00:26:10.836 }, 00:26:10.836 "auth": { 00:26:10.836 "state": "completed", 00:26:10.836 "digest": "sha256", 00:26:10.836 "dhgroup": "ffdhe8192" 00:26:10.836 } 00:26:10.836 } 00:26:10.836 ]' 00:26:10.836 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:11.094 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:11.352 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:12.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:12.286 17:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.286 17:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.853 00:26:12.853 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:12.853 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:12.853 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:13.113 { 00:26:13.113 "cntlid": 45, 00:26:13.113 "qid": 0, 00:26:13.113 "state": "enabled", 00:26:13.113 "thread": "nvmf_tgt_poll_group_000", 00:26:13.113 "listen_address": { 00:26:13.113 "trtype": "TCP", 00:26:13.113 "adrfam": "IPv4", 00:26:13.113 "traddr": "10.0.0.2", 00:26:13.113 "trsvcid": "4420" 00:26:13.113 }, 00:26:13.113 "peer_address": { 00:26:13.113 "trtype": "TCP", 00:26:13.113 "adrfam": "IPv4", 00:26:13.113 "traddr": "10.0.0.1", 00:26:13.113 "trsvcid": "32832" 00:26:13.113 }, 00:26:13.113 "auth": { 00:26:13.113 "state": "completed", 00:26:13.113 "digest": "sha256", 00:26:13.113 "dhgroup": "ffdhe8192" 00:26:13.113 } 00:26:13.113 } 00:26:13.113 ]' 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:13.113 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:13.373 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:13.373 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:13.373 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:13.373 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:13.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.940 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 
--dhchap-key key3 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:14.199 17:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:14.817 00:26:14.817 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:14.817 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:14.817 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:15.385 { 00:26:15.385 "cntlid": 47, 00:26:15.385 "qid": 0, 00:26:15.385 "state": "enabled", 00:26:15.385 "thread": "nvmf_tgt_poll_group_000", 00:26:15.385 "listen_address": { 00:26:15.385 "trtype": "TCP", 00:26:15.385 "adrfam": "IPv4", 00:26:15.385 "traddr": "10.0.0.2", 00:26:15.385 "trsvcid": "4420" 00:26:15.385 }, 00:26:15.385 "peer_address": { 00:26:15.385 "trtype": "TCP", 00:26:15.385 "adrfam": "IPv4", 00:26:15.385 "traddr": "10.0.0.1", 00:26:15.385 "trsvcid": "32858" 00:26:15.385 }, 00:26:15.385 "auth": { 00:26:15.385 "state": "completed", 00:26:15.385 "digest": "sha256", 00:26:15.385 "dhgroup": "ffdhe8192" 00:26:15.385 } 00:26:15.385 } 00:26:15.385 ]' 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:15.385 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:15.643 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:16.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:16.578 17:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.836 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.094 00:26:17.094 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:17.094 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:17.094 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:17.353 { 00:26:17.353 "cntlid": 49, 00:26:17.353 "qid": 0, 00:26:17.353 "state": "enabled", 00:26:17.353 "thread": "nvmf_tgt_poll_group_000", 00:26:17.353 "listen_address": { 00:26:17.353 "trtype": "TCP", 00:26:17.353 "adrfam": "IPv4", 00:26:17.353 "traddr": "10.0.0.2", 00:26:17.353 "trsvcid": "4420" 00:26:17.353 }, 00:26:17.353 "peer_address": { 00:26:17.353 "trtype": "TCP", 00:26:17.353 "adrfam": "IPv4", 00:26:17.353 "traddr": "10.0.0.1", 00:26:17.353 "trsvcid": "32882" 00:26:17.353 }, 00:26:17.353 "auth": { 00:26:17.353 "state": "completed", 00:26:17.353 "digest": "sha384", 00:26:17.353 "dhgroup": "null" 00:26:17.353 } 00:26:17.353 } 00:26:17.353 ]' 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:17.353 17:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:17.609 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:18.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:18.540 17:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.867 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.126 00:26:19.126 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:19.126 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:19.126 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:19.383 { 00:26:19.383 "cntlid": 51, 00:26:19.383 "qid": 0, 00:26:19.383 "state": "enabled", 00:26:19.383 "thread": "nvmf_tgt_poll_group_000", 00:26:19.383 "listen_address": { 00:26:19.383 "trtype": "TCP", 00:26:19.383 "adrfam": "IPv4", 00:26:19.383 "traddr": "10.0.0.2", 00:26:19.383 "trsvcid": "4420" 00:26:19.383 }, 00:26:19.383 "peer_address": { 00:26:19.383 "trtype": "TCP", 00:26:19.383 "adrfam": "IPv4", 00:26:19.383 "traddr": "10.0.0.1", 00:26:19.383 "trsvcid": "48034" 00:26:19.383 }, 00:26:19.383 "auth": { 00:26:19.383 "state": "completed", 00:26:19.383 "digest": "sha384", 00:26:19.383 "dhgroup": "null" 00:26:19.383 } 00:26:19.383 } 00:26:19.383 ]' 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:19.383 17:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:19.639 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret 
DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:20.572 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:20.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:20.572 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:20.572 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.572 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.572 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.573 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:20.573 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:20.573 17:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.573 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.831 00:26:20.831 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:20.831 17:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:21.089 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:21.348 { 00:26:21.348 "cntlid": 53, 00:26:21.348 "qid": 0, 00:26:21.348 "state": "enabled", 00:26:21.348 "thread": "nvmf_tgt_poll_group_000", 00:26:21.348 "listen_address": { 00:26:21.348 "trtype": "TCP", 00:26:21.348 "adrfam": "IPv4", 00:26:21.348 "traddr": "10.0.0.2", 00:26:21.348 "trsvcid": "4420" 00:26:21.348 }, 00:26:21.348 "peer_address": { 00:26:21.348 "trtype": "TCP", 00:26:21.348 "adrfam": "IPv4", 00:26:21.348 "traddr": "10.0.0.1", 00:26:21.348 "trsvcid": "48058" 00:26:21.348 }, 00:26:21.348 "auth": { 00:26:21.348 "state": "completed", 00:26:21.348 "digest": "sha384", 00:26:21.348 "dhgroup": "null" 00:26:21.348 } 00:26:21.348 } 00:26:21.348 ]' 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:21.348 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:21.606 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:22.563 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:22.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:22.564 17:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:22.822 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:23.080 00:26:23.080 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:23.080 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:23.080 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:23.339 { 00:26:23.339 "cntlid": 55, 00:26:23.339 "qid": 0, 00:26:23.339 "state": "enabled", 00:26:23.339 "thread": "nvmf_tgt_poll_group_000", 00:26:23.339 "listen_address": { 00:26:23.339 "trtype": "TCP", 00:26:23.339 "adrfam": "IPv4", 00:26:23.339 "traddr": "10.0.0.2", 00:26:23.339 "trsvcid": "4420" 00:26:23.339 }, 00:26:23.339 "peer_address": { 00:26:23.339 "trtype": "TCP", 00:26:23.339 "adrfam": "IPv4", 00:26:23.339 "traddr": "10.0.0.1", 00:26:23.339 "trsvcid": "48104" 00:26:23.339 }, 00:26:23.339 "auth": { 00:26:23.339 "state": "completed", 00:26:23.339 "digest": "sha384", 00:26:23.339 "dhgroup": "null" 00:26:23.339 } 00:26:23.339 } 00:26:23.339 ]' 00:26:23.339 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:23.632 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:23.632 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:23.632 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:23.632 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:23.632 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:23.632 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:23.632 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:23.890 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:24.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.821 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.388 00:26:25.388 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:25.388 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:25.388 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.646 17:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:25.646 { 00:26:25.646 "cntlid": 57, 00:26:25.646 "qid": 0, 00:26:25.646 "state": "enabled", 00:26:25.646 "thread": "nvmf_tgt_poll_group_000", 00:26:25.646 "listen_address": { 00:26:25.646 "trtype": "TCP", 00:26:25.646 "adrfam": "IPv4", 00:26:25.646 "traddr": "10.0.0.2", 00:26:25.646 "trsvcid": "4420" 00:26:25.646 }, 00:26:25.646 "peer_address": { 00:26:25.646 "trtype": "TCP", 00:26:25.646 "adrfam": "IPv4", 00:26:25.646 "traddr": "10.0.0.1", 00:26:25.646 "trsvcid": "48138" 00:26:25.646 }, 00:26:25.646 "auth": { 00:26:25.646 "state": "completed", 00:26:25.646 "digest": "sha384", 00:26:25.646 "dhgroup": "ffdhe2048" 00:26:25.646 } 00:26:25.646 } 00:26:25.646 ]' 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:25.646 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:26.215 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:26.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:26.783 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.041 17:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.607 00:26:27.607 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:27.607 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:27.607 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:27.865 { 00:26:27.865 "cntlid": 59, 00:26:27.865 "qid": 0, 00:26:27.865 "state": "enabled", 00:26:27.865 "thread": "nvmf_tgt_poll_group_000", 00:26:27.865 "listen_address": { 00:26:27.865 "trtype": "TCP", 00:26:27.865 "adrfam": "IPv4", 00:26:27.865 "traddr": "10.0.0.2", 00:26:27.865 "trsvcid": "4420" 
00:26:27.865 }, 00:26:27.865 "peer_address": { 00:26:27.865 "trtype": "TCP", 00:26:27.865 "adrfam": "IPv4", 00:26:27.865 "traddr": "10.0.0.1", 00:26:27.865 "trsvcid": "48164" 00:26:27.865 }, 00:26:27.865 "auth": { 00:26:27.865 "state": "completed", 00:26:27.865 "digest": "sha384", 00:26:27.865 "dhgroup": "ffdhe2048" 00:26:27.865 } 00:26:27.865 } 00:26:27.865 ]' 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:27.865 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:28.124 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:28.124 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:28.124 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:28.124 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:28.124 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:28.382 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:29.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:29.316 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
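(Recap, condensed from the RPC traces above: each iteration of the target/auth.sh loop runs the same target-side sequence for one digest/dhgroup/key combination. A rough sketch of a single pass, assuming the DH-HMAC-CHAP keys key0..key3 and controller keys ckey0..ckey3 were registered earlier in the script, and using $SUBNQN / $HOSTNQN as shorthand for the subsystem and host NQNs printed in full in the log; rpc_cmd and hostrpc are the script's own wrappers, the latter pointing rpc.py at /var/tmp/host.sock as shown in the traces.)

  # restrict host-side negotiation to the digest/dhgroup pair under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # authorize the host on the target subsystem with the key pair under test
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach a controller from the SPDK host app, which triggers mutual DH-HMAC-CHAP authentication
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2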
00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.574 17:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.833 00:26:29.833 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:29.833 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:29.833 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:30.091 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.091 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:30.091 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.091 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:30.350 { 00:26:30.350 "cntlid": 61, 00:26:30.350 "qid": 0, 00:26:30.350 "state": "enabled", 00:26:30.350 "thread": "nvmf_tgt_poll_group_000", 00:26:30.350 "listen_address": { 00:26:30.350 "trtype": "TCP", 00:26:30.350 "adrfam": "IPv4", 00:26:30.350 "traddr": "10.0.0.2", 00:26:30.350 "trsvcid": "4420" 00:26:30.350 }, 00:26:30.350 "peer_address": { 00:26:30.350 "trtype": "TCP", 00:26:30.350 "adrfam": "IPv4", 00:26:30.350 "traddr": "10.0.0.1", 00:26:30.350 "trsvcid": "40020" 00:26:30.350 }, 00:26:30.350 "auth": { 00:26:30.350 "state": "completed", 00:26:30.350 "digest": "sha384", 00:26:30.350 "dhgroup": "ffdhe2048" 00:26:30.350 } 00:26:30.350 } 00:26:30.350 ]' 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:30.350 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:30.608 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:31.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:31.548 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:31.825 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:32.085 00:26:32.085 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:32.085 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:32.085 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:32.343 { 00:26:32.343 "cntlid": 63, 00:26:32.343 "qid": 0, 00:26:32.343 "state": "enabled", 00:26:32.343 "thread": "nvmf_tgt_poll_group_000", 00:26:32.343 "listen_address": { 00:26:32.343 "trtype": "TCP", 00:26:32.343 "adrfam": "IPv4", 00:26:32.343 "traddr": "10.0.0.2", 00:26:32.343 "trsvcid": "4420" 00:26:32.343 }, 00:26:32.343 "peer_address": { 00:26:32.343 "trtype": "TCP", 00:26:32.343 "adrfam": "IPv4", 00:26:32.343 "traddr": "10.0.0.1", 00:26:32.343 "trsvcid": "40036" 00:26:32.343 }, 00:26:32.343 "auth": { 00:26:32.343 "state": "completed", 00:26:32.343 "digest": "sha384", 00:26:32.343 "dhgroup": "ffdhe2048" 00:26:32.343 } 00:26:32.343 } 00:26:32.343 ]' 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:32.343 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:26:32.602 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:32.602 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:32.602 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:32.861 17:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:33.427 17:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:33.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:33.427 17:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:33.427 17:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.427 17:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.427 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.427 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.427 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:33.427 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:33.427 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.991 17:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.991 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.248 00:26:34.248 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:34.248 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:34.248 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:34.815 { 00:26:34.815 "cntlid": 65, 00:26:34.815 "qid": 0, 00:26:34.815 "state": "enabled", 00:26:34.815 "thread": "nvmf_tgt_poll_group_000", 00:26:34.815 "listen_address": { 00:26:34.815 "trtype": "TCP", 00:26:34.815 "adrfam": "IPv4", 00:26:34.815 "traddr": "10.0.0.2", 00:26:34.815 "trsvcid": "4420" 00:26:34.815 }, 00:26:34.815 "peer_address": { 00:26:34.815 "trtype": "TCP", 00:26:34.815 "adrfam": "IPv4", 00:26:34.815 "traddr": "10.0.0.1", 00:26:34.815 "trsvcid": "40070" 00:26:34.815 }, 00:26:34.815 "auth": { 00:26:34.815 "state": "completed", 00:26:34.815 "digest": "sha384", 00:26:34.815 "dhgroup": "ffdhe3072" 00:26:34.815 } 00:26:34.815 } 00:26:34.815 ]' 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:34.815 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:34.816 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:34.816 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:35.074 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:35.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:35.640 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:26:35.899 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.158 00:26:36.158 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:36.158 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:36.158 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:36.416 { 00:26:36.416 "cntlid": 67, 00:26:36.416 "qid": 0, 00:26:36.416 "state": "enabled", 00:26:36.416 "thread": "nvmf_tgt_poll_group_000", 00:26:36.416 "listen_address": { 00:26:36.416 "trtype": "TCP", 00:26:36.416 "adrfam": "IPv4", 00:26:36.416 "traddr": "10.0.0.2", 00:26:36.416 "trsvcid": "4420" 00:26:36.416 }, 00:26:36.416 "peer_address": { 00:26:36.416 "trtype": "TCP", 00:26:36.416 "adrfam": "IPv4", 00:26:36.416 "traddr": "10.0.0.1", 00:26:36.416 "trsvcid": "40104" 00:26:36.416 }, 00:26:36.416 "auth": { 00:26:36.416 "state": "completed", 00:26:36.416 "digest": "sha384", 00:26:36.416 "dhgroup": "ffdhe3072" 00:26:36.416 } 00:26:36.416 } 00:26:36.416 ]' 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:36.416 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:36.416 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:36.416 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:36.675 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:36.675 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:36.675 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:36.675 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid 
c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:37.241 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:37.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.500 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.500 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
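(Recap, condensed from the traces above: after each attach, the script verifies the negotiated authentication parameters straight from the target's qpair listing before tearing the controller down. Same $SUBNQN shorthand as above; the expected dhgroup depends on the pass.)

  # the controller must show up on the host side
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  # fetch the subsystem's qpairs from the target and check the negotiated auth fields
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
  jq -r '.[0].auth.digest'  <<< "$qpairs"                 # expect: sha384
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"                 # expect: null, ffdhe2048 or ffdhe3072
  jq -r '.[0].auth.state'   <<< "$qpairs"                 # expect: completed
  # detach before moving on to the kernel-initiator check
  hostrpc bdev_nvme_detach_controller nvme0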
00:26:38.069 00:26:38.069 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:38.069 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:38.069 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:38.327 { 00:26:38.327 "cntlid": 69, 00:26:38.327 "qid": 0, 00:26:38.327 "state": "enabled", 00:26:38.327 "thread": "nvmf_tgt_poll_group_000", 00:26:38.327 "listen_address": { 00:26:38.327 "trtype": "TCP", 00:26:38.327 "adrfam": "IPv4", 00:26:38.327 "traddr": "10.0.0.2", 00:26:38.327 "trsvcid": "4420" 00:26:38.327 }, 00:26:38.327 "peer_address": { 00:26:38.327 "trtype": "TCP", 00:26:38.327 "adrfam": "IPv4", 00:26:38.327 "traddr": "10.0.0.1", 00:26:38.327 "trsvcid": "54478" 00:26:38.327 }, 00:26:38.327 "auth": { 00:26:38.327 "state": "completed", 00:26:38.327 "digest": "sha384", 00:26:38.327 "dhgroup": "ffdhe3072" 00:26:38.327 } 00:26:38.327 } 00:26:38.327 ]' 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:38.327 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:38.328 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:38.328 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:38.328 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:38.328 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:38.328 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:38.586 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:39.523 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:39.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
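
Editor's note: the same key pair is then exercised through the kernel initiator before the host entry is removed, as the next lines of the trace show. A minimal sketch of that leg, using only nvme-cli options and secrets that appear in the trace (the DHHC-1 strings are this run's generated test keys, not reusable values):

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45
  HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45
  KEY='DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==:'  # key2 secret printed above
  CKEY='DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u:'                          # ckey2 secret printed above

  # Connect through the kernel NVMe/TCP initiator with bidirectional DH-HMAC-CHAP.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

  # Tear down; the trace expects "disconnected 1 controller(s)".
  nvme disconnect -n "$SUBNQN"

  # Drop the host entry from the target so the next key can be installed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
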
00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:39.524 17:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:39.524 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:39.786 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.044 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:40.303 { 00:26:40.303 "cntlid": 71, 00:26:40.303 "qid": 0, 00:26:40.303 "state": "enabled", 00:26:40.303 "thread": "nvmf_tgt_poll_group_000", 00:26:40.303 "listen_address": { 00:26:40.303 "trtype": "TCP", 00:26:40.303 "adrfam": "IPv4", 00:26:40.303 "traddr": "10.0.0.2", 00:26:40.303 "trsvcid": "4420" 00:26:40.303 }, 00:26:40.303 "peer_address": { 00:26:40.303 "trtype": "TCP", 00:26:40.303 "adrfam": "IPv4", 00:26:40.303 "traddr": "10.0.0.1", 00:26:40.303 "trsvcid": "54518" 00:26:40.303 }, 00:26:40.303 "auth": { 00:26:40.303 "state": "completed", 00:26:40.303 "digest": "sha384", 00:26:40.303 "dhgroup": "ffdhe3072" 00:26:40.303 } 00:26:40.303 } 00:26:40.303 ]' 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:40.303 17:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:40.562 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:41.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:41.498 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.757 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.015 00:26:42.015 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:42.015 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:42.015 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:42.273 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.273 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:42.273 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.273 17:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:42.273 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.273 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:42.273 { 00:26:42.273 "cntlid": 73, 00:26:42.273 "qid": 0, 00:26:42.273 "state": "enabled", 00:26:42.273 "thread": "nvmf_tgt_poll_group_000", 00:26:42.273 "listen_address": { 00:26:42.273 "trtype": "TCP", 00:26:42.273 "adrfam": "IPv4", 00:26:42.273 "traddr": "10.0.0.2", 00:26:42.273 "trsvcid": "4420" 00:26:42.273 }, 00:26:42.273 "peer_address": { 00:26:42.273 "trtype": "TCP", 00:26:42.273 "adrfam": "IPv4", 00:26:42.273 "traddr": "10.0.0.1", 00:26:42.273 "trsvcid": "54544" 00:26:42.273 }, 00:26:42.273 "auth": { 00:26:42.273 "state": "completed", 00:26:42.273 "digest": "sha384", 00:26:42.273 "dhgroup": "ffdhe4096" 00:26:42.273 } 00:26:42.273 } 00:26:42.273 ]' 00:26:42.273 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:42.532 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:42.532 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:42.532 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:42.532 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:42.532 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:42.532 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:42.532 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:42.834 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:43.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.802 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.061 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.061 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.061 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.320 00:26:44.320 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:44.320 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:44.320 17:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:44.579 { 00:26:44.579 "cntlid": 75, 00:26:44.579 "qid": 0, 00:26:44.579 
"state": "enabled", 00:26:44.579 "thread": "nvmf_tgt_poll_group_000", 00:26:44.579 "listen_address": { 00:26:44.579 "trtype": "TCP", 00:26:44.579 "adrfam": "IPv4", 00:26:44.579 "traddr": "10.0.0.2", 00:26:44.579 "trsvcid": "4420" 00:26:44.579 }, 00:26:44.579 "peer_address": { 00:26:44.579 "trtype": "TCP", 00:26:44.579 "adrfam": "IPv4", 00:26:44.579 "traddr": "10.0.0.1", 00:26:44.579 "trsvcid": "54586" 00:26:44.579 }, 00:26:44.579 "auth": { 00:26:44.579 "state": "completed", 00:26:44.579 "digest": "sha384", 00:26:44.579 "dhgroup": "ffdhe4096" 00:26:44.579 } 00:26:44.579 } 00:26:44.579 ]' 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:44.579 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:44.837 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:44.837 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:44.837 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:44.837 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:44.837 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:45.095 17:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:46.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:46.031 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:46.290 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.291 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.291 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:46.291 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.291 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.291 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.549 00:26:46.549 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:46.549 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:46.549 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:46.808 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.809 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:46.809 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.809 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:46.809 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.809 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:46.809 { 00:26:46.809 "cntlid": 77, 00:26:46.809 "qid": 0, 00:26:46.809 "state": "enabled", 00:26:46.809 "thread": "nvmf_tgt_poll_group_000", 00:26:46.809 "listen_address": { 00:26:46.809 "trtype": "TCP", 00:26:46.809 "adrfam": "IPv4", 00:26:46.809 "traddr": "10.0.0.2", 00:26:46.809 "trsvcid": "4420" 00:26:46.809 }, 00:26:46.809 "peer_address": { 00:26:46.809 "trtype": "TCP", 00:26:46.809 "adrfam": "IPv4", 00:26:46.809 "traddr": "10.0.0.1", 00:26:46.809 "trsvcid": "54618" 00:26:46.809 }, 00:26:46.809 
"auth": { 00:26:46.809 "state": "completed", 00:26:46.809 "digest": "sha384", 00:26:46.809 "dhgroup": "ffdhe4096" 00:26:46.809 } 00:26:46.809 } 00:26:46.809 ]' 00:26:46.809 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:47.067 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:47.325 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:48.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.259 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:48.518 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:49.084 00:26:49.084 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:49.084 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:49.084 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:49.341 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.341 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:49.341 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:49.342 { 00:26:49.342 "cntlid": 79, 00:26:49.342 "qid": 0, 00:26:49.342 "state": "enabled", 00:26:49.342 "thread": "nvmf_tgt_poll_group_000", 00:26:49.342 "listen_address": { 00:26:49.342 "trtype": "TCP", 00:26:49.342 "adrfam": "IPv4", 00:26:49.342 "traddr": "10.0.0.2", 00:26:49.342 "trsvcid": "4420" 00:26:49.342 }, 00:26:49.342 "peer_address": { 00:26:49.342 "trtype": "TCP", 00:26:49.342 "adrfam": "IPv4", 00:26:49.342 "traddr": "10.0.0.1", 00:26:49.342 "trsvcid": "43292" 00:26:49.342 }, 00:26:49.342 "auth": { 00:26:49.342 "state": "completed", 00:26:49.342 "digest": "sha384", 00:26:49.342 "dhgroup": "ffdhe4096" 00:26:49.342 } 00:26:49.342 } 00:26:49.342 ]' 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
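
Editor's note: the @92/@93 markers in the trace show the shape of the sweep driving all of this: every DH group is tried against every key index, and key3 is installed without a controller key so unidirectional authentication is covered as well. The loop below is a reconstruction from those trace markers, not the verbatim target/auth.sh source; only the sha384 pass appears in this excerpt, and connect_authenticate is the per-iteration body (set up, attach, verify, detach, kernel connect) traced above.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  # connect_authenticate() is defined earlier in target/auth.sh; its body is what the @34..@56 lines above trace.

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do            # target/auth.sh@92
      for keyid in 0 1 2 3; do                                          # target/auth.sh@93 (key indices seen in this run)
          # Pin the host's allowed digest/DH group before each attach.  # target/auth.sh@94
          $RPC -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"               # target/auth.sh@96
      done
  done
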
00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:49.342 17:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:49.948 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:50.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.513 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.772 17:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.772 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.337 00:26:51.337 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:51.337 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:51.337 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:51.596 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.596 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:51.596 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.596 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:51.596 { 00:26:51.596 "cntlid": 81, 00:26:51.596 "qid": 0, 00:26:51.596 "state": "enabled", 00:26:51.596 "thread": "nvmf_tgt_poll_group_000", 00:26:51.596 "listen_address": { 00:26:51.596 "trtype": "TCP", 00:26:51.596 "adrfam": "IPv4", 00:26:51.596 "traddr": "10.0.0.2", 00:26:51.596 "trsvcid": "4420" 00:26:51.596 }, 00:26:51.596 "peer_address": { 00:26:51.596 "trtype": "TCP", 00:26:51.596 "adrfam": "IPv4", 00:26:51.596 "traddr": "10.0.0.1", 00:26:51.596 "trsvcid": "43316" 00:26:51.596 }, 00:26:51.596 "auth": { 00:26:51.596 "state": "completed", 00:26:51.596 "digest": "sha384", 00:26:51.596 "dhgroup": "ffdhe6144" 00:26:51.596 } 00:26:51.596 } 00:26:51.596 ]' 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:51.596 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:51.854 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:26:52.421 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:52.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:52.421 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.986 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.288 00:26:53.288 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:53.288 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:53.288 17:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:53.547 { 00:26:53.547 "cntlid": 83, 00:26:53.547 "qid": 0, 00:26:53.547 "state": "enabled", 00:26:53.547 "thread": "nvmf_tgt_poll_group_000", 00:26:53.547 "listen_address": { 00:26:53.547 "trtype": "TCP", 00:26:53.547 "adrfam": "IPv4", 00:26:53.547 "traddr": "10.0.0.2", 00:26:53.547 "trsvcid": "4420" 00:26:53.547 }, 00:26:53.547 "peer_address": { 00:26:53.547 "trtype": "TCP", 00:26:53.547 "adrfam": "IPv4", 00:26:53.547 "traddr": "10.0.0.1", 00:26:53.547 "trsvcid": "43328" 00:26:53.547 }, 00:26:53.547 "auth": { 00:26:53.547 "state": "completed", 00:26:53.547 "digest": "sha384", 00:26:53.547 "dhgroup": "ffdhe6144" 00:26:53.547 } 00:26:53.547 } 00:26:53.547 ]' 00:26:53.547 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:53.805 17:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:54.063 17:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:54.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:54.629 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.887 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.454 00:26:55.454 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:55.454 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:55.454 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:55.711 { 00:26:55.711 "cntlid": 85, 00:26:55.711 "qid": 0, 00:26:55.711 "state": "enabled", 00:26:55.711 "thread": "nvmf_tgt_poll_group_000", 00:26:55.711 "listen_address": { 00:26:55.711 "trtype": "TCP", 00:26:55.711 "adrfam": "IPv4", 00:26:55.711 "traddr": "10.0.0.2", 00:26:55.711 "trsvcid": "4420" 00:26:55.711 }, 00:26:55.711 "peer_address": { 00:26:55.711 "trtype": "TCP", 00:26:55.711 "adrfam": "IPv4", 00:26:55.711 "traddr": "10.0.0.1", 00:26:55.711 "trsvcid": "43352" 00:26:55.711 }, 00:26:55.711 "auth": { 00:26:55.711 "state": "completed", 00:26:55.711 "digest": "sha384", 00:26:55.711 "dhgroup": "ffdhe6144" 00:26:55.711 } 00:26:55.711 } 00:26:55.711 ]' 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:55.711 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:55.712 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:55.712 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:55.712 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:55.712 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:56.279 17:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret 
DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:56.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:56.845 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:57.103 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:57.670 00:26:57.670 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:57.670 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:57.670 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:57.929 { 00:26:57.929 "cntlid": 87, 00:26:57.929 "qid": 0, 00:26:57.929 "state": "enabled", 00:26:57.929 "thread": "nvmf_tgt_poll_group_000", 00:26:57.929 "listen_address": { 00:26:57.929 "trtype": "TCP", 00:26:57.929 "adrfam": "IPv4", 00:26:57.929 "traddr": "10.0.0.2", 00:26:57.929 "trsvcid": "4420" 00:26:57.929 }, 00:26:57.929 "peer_address": { 00:26:57.929 "trtype": "TCP", 00:26:57.929 "adrfam": "IPv4", 00:26:57.929 "traddr": "10.0.0.1", 00:26:57.929 "trsvcid": "43370" 00:26:57.929 }, 00:26:57.929 "auth": { 00:26:57.929 "state": "completed", 00:26:57.929 "digest": "sha384", 00:26:57.929 "dhgroup": "ffdhe6144" 00:26:57.929 } 00:26:57.929 } 00:26:57.929 ]' 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:57.929 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:58.188 17:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:59.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.137 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.701 00:26:59.701 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:59.701 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:59.701 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.960 17:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:59.960 { 00:26:59.960 "cntlid": 89, 00:26:59.960 "qid": 0, 00:26:59.960 "state": "enabled", 00:26:59.960 "thread": "nvmf_tgt_poll_group_000", 00:26:59.960 "listen_address": { 00:26:59.960 "trtype": "TCP", 00:26:59.960 "adrfam": "IPv4", 00:26:59.960 "traddr": "10.0.0.2", 00:26:59.960 "trsvcid": "4420" 00:26:59.960 }, 00:26:59.960 "peer_address": { 00:26:59.960 "trtype": "TCP", 00:26:59.960 "adrfam": "IPv4", 00:26:59.960 "traddr": "10.0.0.1", 00:26:59.960 "trsvcid": "50762" 00:26:59.960 }, 00:26:59.960 "auth": { 00:26:59.960 "state": "completed", 00:26:59.960 "digest": "sha384", 00:26:59.960 "dhgroup": "ffdhe8192" 00:26:59.960 } 00:26:59.960 } 00:26:59.960 ]' 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:59.960 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:00.218 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:00.218 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:00.218 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:00.477 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:01.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:01.044 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:01.610 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:27:01.610 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.611 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.183 00:27:02.183 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:02.183 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:02.183 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.452 17:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:02.452 { 00:27:02.452 "cntlid": 91, 00:27:02.452 "qid": 0, 00:27:02.452 "state": "enabled", 00:27:02.452 "thread": "nvmf_tgt_poll_group_000", 00:27:02.452 "listen_address": { 00:27:02.452 "trtype": "TCP", 00:27:02.452 "adrfam": "IPv4", 00:27:02.452 "traddr": "10.0.0.2", 00:27:02.452 "trsvcid": "4420" 00:27:02.452 }, 00:27:02.452 "peer_address": { 00:27:02.452 "trtype": "TCP", 00:27:02.452 "adrfam": "IPv4", 00:27:02.452 "traddr": "10.0.0.1", 00:27:02.452 "trsvcid": "50790" 00:27:02.452 }, 00:27:02.452 "auth": { 00:27:02.452 "state": "completed", 00:27:02.452 "digest": "sha384", 00:27:02.452 "dhgroup": "ffdhe8192" 00:27:02.452 } 00:27:02.452 } 00:27:02.452 ]' 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:02.452 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:02.452 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:02.452 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:02.710 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:03.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:03.644 17:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.644 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.578 00:27:04.578 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:04.578 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:04.578 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:04.578 { 00:27:04.578 "cntlid": 93, 00:27:04.578 "qid": 0, 00:27:04.578 "state": "enabled", 00:27:04.578 "thread": "nvmf_tgt_poll_group_000", 00:27:04.578 "listen_address": { 00:27:04.578 "trtype": "TCP", 00:27:04.578 "adrfam": "IPv4", 
00:27:04.578 "traddr": "10.0.0.2", 00:27:04.578 "trsvcid": "4420" 00:27:04.578 }, 00:27:04.578 "peer_address": { 00:27:04.578 "trtype": "TCP", 00:27:04.578 "adrfam": "IPv4", 00:27:04.578 "traddr": "10.0.0.1", 00:27:04.578 "trsvcid": "50824" 00:27:04.578 }, 00:27:04.578 "auth": { 00:27:04.578 "state": "completed", 00:27:04.578 "digest": "sha384", 00:27:04.578 "dhgroup": "ffdhe8192" 00:27:04.578 } 00:27:04.578 } 00:27:04.578 ]' 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:04.578 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:04.835 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:04.835 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:04.835 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:04.835 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:04.835 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:05.093 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:06.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:06.027 17:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:06.027 17:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:06.962 00:27:06.962 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:06.962 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:06.962 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:07.222 { 00:27:07.222 "cntlid": 95, 00:27:07.222 "qid": 0, 00:27:07.222 "state": "enabled", 00:27:07.222 "thread": "nvmf_tgt_poll_group_000", 00:27:07.222 "listen_address": { 00:27:07.222 "trtype": "TCP", 00:27:07.222 "adrfam": "IPv4", 00:27:07.222 "traddr": "10.0.0.2", 00:27:07.222 "trsvcid": "4420" 00:27:07.222 }, 00:27:07.222 "peer_address": { 00:27:07.222 "trtype": "TCP", 00:27:07.222 "adrfam": "IPv4", 00:27:07.222 "traddr": "10.0.0.1", 00:27:07.222 "trsvcid": "50860" 00:27:07.222 }, 00:27:07.222 "auth": { 00:27:07.222 "state": "completed", 00:27:07.222 "digest": "sha384", 00:27:07.222 "dhgroup": "ffdhe8192" 00:27:07.222 } 00:27:07.222 } 00:27:07.222 ]' 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:07.222 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:07.480 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:08.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:08.412 17:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:08.671 17:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.671 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.928 00:27:08.928 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:08.928 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:08.928 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:09.186 { 00:27:09.186 "cntlid": 97, 00:27:09.186 "qid": 0, 00:27:09.186 "state": "enabled", 00:27:09.186 "thread": "nvmf_tgt_poll_group_000", 00:27:09.186 "listen_address": { 00:27:09.186 "trtype": "TCP", 00:27:09.186 "adrfam": "IPv4", 00:27:09.186 "traddr": "10.0.0.2", 00:27:09.186 "trsvcid": "4420" 00:27:09.186 }, 00:27:09.186 "peer_address": { 00:27:09.186 "trtype": "TCP", 00:27:09.186 "adrfam": "IPv4", 00:27:09.186 "traddr": "10.0.0.1", 00:27:09.186 "trsvcid": "57608" 00:27:09.186 }, 00:27:09.186 "auth": { 00:27:09.186 "state": "completed", 00:27:09.186 "digest": "sha512", 00:27:09.186 "dhgroup": "null" 00:27:09.186 } 00:27:09.186 } 00:27:09.186 ]' 00:27:09.186 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:09.445 17:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:09.702 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:10.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:10.327 17:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.586 17:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.586 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:10.843 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.843 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.843 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.101 00:27:11.101 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:11.101 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:11.101 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:11.360 { 00:27:11.360 "cntlid": 99, 00:27:11.360 "qid": 0, 00:27:11.360 "state": "enabled", 00:27:11.360 "thread": "nvmf_tgt_poll_group_000", 00:27:11.360 "listen_address": { 00:27:11.360 "trtype": "TCP", 00:27:11.360 "adrfam": "IPv4", 00:27:11.360 "traddr": "10.0.0.2", 00:27:11.360 "trsvcid": "4420" 00:27:11.360 }, 00:27:11.360 "peer_address": { 00:27:11.360 "trtype": "TCP", 00:27:11.360 "adrfam": "IPv4", 00:27:11.360 "traddr": "10.0.0.1", 00:27:11.360 "trsvcid": "57636" 00:27:11.360 }, 00:27:11.360 "auth": { 00:27:11.360 "state": "completed", 00:27:11.360 "digest": "sha512", 00:27:11.360 "dhgroup": "null" 00:27:11.360 } 00:27:11.360 } 00:27:11.360 ]' 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
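In isolation, the iteration traced above (sha512 digest, "null" dhgroup, key1) boils down to roughly the following host/target RPC sequence. This is a condensed sketch for the reader, not the test script itself: the rpc.py path, the /var/tmp/host.sock socket, the NQNs and the key names are copied from the trace, it assumes the DH-CHAP keys (key1/ckey1) were already registered earlier in the run, and the target-side RPCs are assumed to go to the default application socket.

  # Condensed sketch of one connect_authenticate pass, assumptions as noted above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }    # host-side bdev_nvme application
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45

  # Restrict the initiator to one digest/dhgroup combination for this pass.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

  # Allow the host on the target with the key pair under test, then attach from the host side.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the qpair really completed authentication with the expected parameters.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == null ]]

  hostrpc bdev_nvme_detach_controller nvme0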
00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:11.360 17:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:11.618 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:12.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:12.559 17:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.817 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.075 00:27:13.075 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:13.075 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:13.075 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:13.333 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.333 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:13.333 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.333 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:13.591 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.591 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:13.591 { 00:27:13.591 "cntlid": 101, 00:27:13.591 "qid": 0, 00:27:13.591 "state": "enabled", 00:27:13.591 "thread": "nvmf_tgt_poll_group_000", 00:27:13.591 "listen_address": { 00:27:13.591 "trtype": "TCP", 00:27:13.591 "adrfam": "IPv4", 00:27:13.591 "traddr": "10.0.0.2", 00:27:13.591 "trsvcid": "4420" 00:27:13.591 }, 00:27:13.591 "peer_address": { 00:27:13.591 "trtype": "TCP", 00:27:13.591 "adrfam": "IPv4", 00:27:13.591 "traddr": "10.0.0.1", 00:27:13.591 "trsvcid": "57676" 00:27:13.591 }, 00:27:13.591 "auth": { 00:27:13.591 "state": "completed", 00:27:13.591 "digest": "sha512", 00:27:13.591 "dhgroup": "null" 00:27:13.591 } 00:27:13.591 } 00:27:13.591 ]' 00:27:13.591 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:13.591 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:13.591 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:13.591 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:13.591 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:13.591 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:13.591 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:13.591 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:13.850 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:14.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:14.786 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:15.045 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:15.304 
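The kernel-initiator leg interleaved with the RPC checks above follows the same pattern each time: connect with nvme-cli using DH-CHAP secrets, disconnect, then remove the host entry before the next digest/dhgroup combination is configured. A sketch of that leg is below; the DHHC-1 strings are placeholders standing in for the secrets generated earlier in the run, and everything else mirrors the flags visible in the trace.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

  # Connect through the kernel initiator, presenting both host and controller secrets
  # (placeholders here; the run uses the keys generated during setup).
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
       -q "$hostnqn" --hostid "$hostid" \
       --dhchap-secret "DHHC-1:02:<host-secret-placeholder>:" \
       --dhchap-ctrl-secret "DHHC-1:01:<controller-secret-placeholder>:"

  # Tear down before the next iteration; the trace then removes the host entry
  # with nvmf_subsystem_remove_host on the target side.
  nvme disconnect -n "$subnqn"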
00:27:15.304 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:15.304 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:15.304 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:15.563 { 00:27:15.563 "cntlid": 103, 00:27:15.563 "qid": 0, 00:27:15.563 "state": "enabled", 00:27:15.563 "thread": "nvmf_tgt_poll_group_000", 00:27:15.563 "listen_address": { 00:27:15.563 "trtype": "TCP", 00:27:15.563 "adrfam": "IPv4", 00:27:15.563 "traddr": "10.0.0.2", 00:27:15.563 "trsvcid": "4420" 00:27:15.563 }, 00:27:15.563 "peer_address": { 00:27:15.563 "trtype": "TCP", 00:27:15.563 "adrfam": "IPv4", 00:27:15.563 "traddr": "10.0.0.1", 00:27:15.563 "trsvcid": "57708" 00:27:15.563 }, 00:27:15.563 "auth": { 00:27:15.563 "state": "completed", 00:27:15.563 "digest": "sha512", 00:27:15.563 "dhgroup": "null" 00:27:15.563 } 00:27:15.563 } 00:27:15.563 ]' 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:15.563 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:15.824 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:15.824 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:15.824 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:16.082 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:16.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:16.650 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.216 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.474 00:27:17.474 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:17.474 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:17.474 17:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:27:17.732 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.732 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:17.732 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.732 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.732 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.732 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:17.732 { 00:27:17.732 "cntlid": 105, 00:27:17.732 "qid": 0, 00:27:17.732 "state": "enabled", 00:27:17.732 "thread": "nvmf_tgt_poll_group_000", 00:27:17.732 "listen_address": { 00:27:17.732 "trtype": "TCP", 00:27:17.732 "adrfam": "IPv4", 00:27:17.732 "traddr": "10.0.0.2", 00:27:17.732 "trsvcid": "4420" 00:27:17.732 }, 00:27:17.733 "peer_address": { 00:27:17.733 "trtype": "TCP", 00:27:17.733 "adrfam": "IPv4", 00:27:17.733 "traddr": "10.0.0.1", 00:27:17.733 "trsvcid": "57730" 00:27:17.733 }, 00:27:17.733 "auth": { 00:27:17.733 "state": "completed", 00:27:17.733 "digest": "sha512", 00:27:17.733 "dhgroup": "ffdhe2048" 00:27:17.733 } 00:27:17.733 } 00:27:17.733 ]' 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:17.733 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:18.322 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:18.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
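Each pass above reduces to the same short RPC sequence, condensed here as a sketch: the socket path, NQNs and key names are taken from the trace itself, rpc_cmd is assumed to be the autotest helper that forwards to scripts/rpc.py on the target's RPC socket, and no real key material is shown.

    # Host side: restrict the initiator to the digest/dhgroup pair under test (sha512 + ffdhe2048 here).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host NQN and bind it to the DH-HMAC-CHAP key pair for this iteration.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; the call only succeeds if DH-HMAC-CHAP completes.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1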
00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:18.890 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.148 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.407 00:27:19.407 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:19.407 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:19.407 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:19.666 { 00:27:19.666 "cntlid": 107, 00:27:19.666 "qid": 0, 00:27:19.666 "state": "enabled", 00:27:19.666 "thread": "nvmf_tgt_poll_group_000", 00:27:19.666 "listen_address": { 00:27:19.666 "trtype": "TCP", 00:27:19.666 "adrfam": "IPv4", 00:27:19.666 "traddr": "10.0.0.2", 00:27:19.666 "trsvcid": "4420" 00:27:19.666 }, 00:27:19.666 "peer_address": { 00:27:19.666 "trtype": "TCP", 00:27:19.666 "adrfam": "IPv4", 00:27:19.666 "traddr": "10.0.0.1", 00:27:19.666 "trsvcid": "49178" 00:27:19.666 }, 00:27:19.666 "auth": { 00:27:19.666 "state": "completed", 00:27:19.666 "digest": "sha512", 00:27:19.666 "dhgroup": "ffdhe2048" 00:27:19.666 } 00:27:19.666 } 00:27:19.666 ]' 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:19.666 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:19.925 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:19.925 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:19.925 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:19.925 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:19.925 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:20.184 17:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:20.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:20.801 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.369 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.628 00:27:21.628 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:21.628 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:21.628 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:21.885 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.885 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:21.885 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.885 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.885 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.885 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:21.885 { 00:27:21.885 "cntlid": 109, 00:27:21.885 "qid": 
0, 00:27:21.885 "state": "enabled", 00:27:21.885 "thread": "nvmf_tgt_poll_group_000", 00:27:21.885 "listen_address": { 00:27:21.885 "trtype": "TCP", 00:27:21.885 "adrfam": "IPv4", 00:27:21.885 "traddr": "10.0.0.2", 00:27:21.886 "trsvcid": "4420" 00:27:21.886 }, 00:27:21.886 "peer_address": { 00:27:21.886 "trtype": "TCP", 00:27:21.886 "adrfam": "IPv4", 00:27:21.886 "traddr": "10.0.0.1", 00:27:21.886 "trsvcid": "49212" 00:27:21.886 }, 00:27:21.886 "auth": { 00:27:21.886 "state": "completed", 00:27:21.886 "digest": "sha512", 00:27:21.886 "dhgroup": "ffdhe2048" 00:27:21.886 } 00:27:21.886 } 00:27:21.886 ]' 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:21.886 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:22.143 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:22.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.711 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:22.970 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:23.538 00:27:23.538 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:23.538 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:23.538 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:23.797 { 00:27:23.797 "cntlid": 111, 00:27:23.797 "qid": 0, 00:27:23.797 "state": "enabled", 00:27:23.797 "thread": "nvmf_tgt_poll_group_000", 00:27:23.797 "listen_address": { 00:27:23.797 "trtype": "TCP", 00:27:23.797 "adrfam": "IPv4", 00:27:23.797 "traddr": "10.0.0.2", 00:27:23.797 "trsvcid": "4420" 00:27:23.797 }, 00:27:23.797 "peer_address": { 00:27:23.797 "trtype": "TCP", 00:27:23.797 "adrfam": "IPv4", 00:27:23.797 "traddr": "10.0.0.1", 00:27:23.797 "trsvcid": "49234" 00:27:23.797 }, 00:27:23.797 "auth": { 00:27:23.797 "state": "completed", 00:27:23.797 
"digest": "sha512", 00:27:23.797 "dhgroup": "ffdhe2048" 00:27:23.797 } 00:27:23.797 } 00:27:23.797 ]' 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:23.797 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:24.056 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:24.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:24.992 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.993 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.251 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.510 00:27:25.510 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:25.510 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:25.510 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:25.768 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.768 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:25.768 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.768 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.769 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.769 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:25.769 { 00:27:25.769 "cntlid": 113, 00:27:25.769 "qid": 0, 00:27:25.769 "state": "enabled", 00:27:25.769 "thread": "nvmf_tgt_poll_group_000", 00:27:25.769 "listen_address": { 00:27:25.769 "trtype": "TCP", 00:27:25.769 "adrfam": "IPv4", 00:27:25.769 "traddr": "10.0.0.2", 00:27:25.769 "trsvcid": "4420" 00:27:25.769 }, 00:27:25.769 "peer_address": { 00:27:25.769 "trtype": "TCP", 00:27:25.769 "adrfam": "IPv4", 00:27:25.769 "traddr": "10.0.0.1", 00:27:25.769 "trsvcid": "49270" 00:27:25.769 }, 00:27:25.769 "auth": { 00:27:25.769 "state": "completed", 00:27:25.769 "digest": "sha512", 00:27:25.769 "dhgroup": "ffdhe3072" 00:27:25.769 } 00:27:25.769 } 00:27:25.769 ]' 00:27:25.769 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:25.769 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:25.769 17:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:25.769 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:25.769 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:26.027 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:26.027 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:26.027 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:26.286 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:26.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:26.854 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.112 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.681 00:27:27.681 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:27.681 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:27.681 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:27.939 { 00:27:27.939 "cntlid": 115, 00:27:27.939 "qid": 0, 00:27:27.939 "state": "enabled", 00:27:27.939 "thread": "nvmf_tgt_poll_group_000", 00:27:27.939 "listen_address": { 00:27:27.939 "trtype": "TCP", 00:27:27.939 "adrfam": "IPv4", 00:27:27.939 "traddr": "10.0.0.2", 00:27:27.939 "trsvcid": "4420" 00:27:27.939 }, 00:27:27.939 "peer_address": { 00:27:27.939 "trtype": "TCP", 00:27:27.939 "adrfam": "IPv4", 00:27:27.939 "traddr": "10.0.0.1", 00:27:27.939 "trsvcid": "49298" 00:27:27.939 }, 00:27:27.939 "auth": { 00:27:27.939 "state": "completed", 00:27:27.939 "digest": "sha512", 00:27:27.939 "dhgroup": "ffdhe3072" 00:27:27.939 } 00:27:27.939 } 00:27:27.939 ]' 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:27.939 17:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:27.939 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:28.506 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:29.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:29.073 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.329 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.587 17:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.587 17:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.845 00:27:29.845 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:29.845 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:29.845 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:30.103 { 00:27:30.103 "cntlid": 117, 00:27:30.103 "qid": 0, 00:27:30.103 "state": "enabled", 00:27:30.103 "thread": "nvmf_tgt_poll_group_000", 00:27:30.103 "listen_address": { 00:27:30.103 "trtype": "TCP", 00:27:30.103 "adrfam": "IPv4", 00:27:30.103 "traddr": "10.0.0.2", 00:27:30.103 "trsvcid": "4420" 00:27:30.103 }, 00:27:30.103 "peer_address": { 00:27:30.103 "trtype": "TCP", 00:27:30.103 "adrfam": "IPv4", 00:27:30.103 "traddr": "10.0.0.1", 00:27:30.103 "trsvcid": "36862" 00:27:30.103 }, 00:27:30.103 "auth": { 00:27:30.103 "state": "completed", 00:27:30.103 "digest": "sha512", 00:27:30.103 "dhgroup": "ffdhe3072" 00:27:30.103 } 00:27:30.103 } 00:27:30.103 ]' 00:27:30.103 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:30.360 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
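The trace also exercises the Linux kernel initiator against the same subsystem; a minimal sketch of that step follows, with the DHHC-1 secrets replaced by placeholders (the real ones are generated earlier in the run) and the teardown that precedes the next iteration included.

    # Kernel host: connect with explicit DH-HMAC-CHAP secrets (placeholders, not real keys).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
        --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
        --dhchap-secret 'DHHC-1:02:<host-key>:' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-key>:'

    # Tear the session down and drop the host from the subsystem before the next keyid/dhgroup pass.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45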
00:27:30.617 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:31.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:31.550 17:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:31.808 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:32.067 00:27:32.345 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:32.345 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:32.345 17:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:32.603 { 00:27:32.603 "cntlid": 119, 00:27:32.603 "qid": 0, 00:27:32.603 "state": "enabled", 00:27:32.603 "thread": "nvmf_tgt_poll_group_000", 00:27:32.603 "listen_address": { 00:27:32.603 "trtype": "TCP", 00:27:32.603 "adrfam": "IPv4", 00:27:32.603 "traddr": "10.0.0.2", 00:27:32.603 "trsvcid": "4420" 00:27:32.603 }, 00:27:32.603 "peer_address": { 00:27:32.603 "trtype": "TCP", 00:27:32.603 "adrfam": "IPv4", 00:27:32.603 "traddr": "10.0.0.1", 00:27:32.603 "trsvcid": "36888" 00:27:32.603 }, 00:27:32.603 "auth": { 00:27:32.603 "state": "completed", 00:27:32.603 "digest": "sha512", 00:27:32.603 "dhgroup": "ffdhe3072" 00:27:32.603 } 00:27:32.603 } 00:27:32.603 ]' 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:32.603 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:32.860 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:32.860 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:32.860 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:33.118 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:27:33.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.683 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.942 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.507 00:27:34.507 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:34.507 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:34.507 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:34.765 { 00:27:34.765 "cntlid": 121, 00:27:34.765 "qid": 0, 00:27:34.765 "state": "enabled", 00:27:34.765 "thread": "nvmf_tgt_poll_group_000", 00:27:34.765 "listen_address": { 00:27:34.765 "trtype": "TCP", 00:27:34.765 "adrfam": "IPv4", 00:27:34.765 "traddr": "10.0.0.2", 00:27:34.765 "trsvcid": "4420" 00:27:34.765 }, 00:27:34.765 "peer_address": { 00:27:34.765 "trtype": "TCP", 00:27:34.765 "adrfam": "IPv4", 00:27:34.765 "traddr": "10.0.0.1", 00:27:34.765 "trsvcid": "36912" 00:27:34.765 }, 00:27:34.765 "auth": { 00:27:34.765 "state": "completed", 00:27:34.765 "digest": "sha512", 00:27:34.765 "dhgroup": "ffdhe4096" 00:27:34.765 } 00:27:34.765 } 00:27:34.765 ]' 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:34.765 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:35.333 17:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:35.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:35.901 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.160 17:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.418 00:27:36.418 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:36.418 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:36.418 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.983 17:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:36.983 { 00:27:36.983 "cntlid": 123, 00:27:36.983 "qid": 0, 00:27:36.983 "state": "enabled", 00:27:36.983 "thread": "nvmf_tgt_poll_group_000", 00:27:36.983 "listen_address": { 00:27:36.983 "trtype": "TCP", 00:27:36.983 "adrfam": "IPv4", 00:27:36.983 "traddr": "10.0.0.2", 00:27:36.983 "trsvcid": "4420" 00:27:36.983 }, 00:27:36.983 "peer_address": { 00:27:36.983 "trtype": "TCP", 00:27:36.983 "adrfam": "IPv4", 00:27:36.983 "traddr": "10.0.0.1", 00:27:36.983 "trsvcid": "36952" 00:27:36.983 }, 00:27:36.983 "auth": { 00:27:36.983 "state": "completed", 00:27:36.983 "digest": "sha512", 00:27:36.983 "dhgroup": "ffdhe4096" 00:27:36.983 } 00:27:36.983 } 00:27:36.983 ]' 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:36.983 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:37.241 17:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:38.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
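The records above close out one full DH-HMAC-CHAP iteration of the sha512/ffdhe4096 pass (key1 with controller key ckey1): restrict the host initiator's digest/dhgroup, register the host NQN on the subsystem with the key pair, attach a controller from the host app so the authentication exchange runs, verify, then tear everything down before the next key. Condensed into plain shell the cycle looks roughly like the sketch below; this is an illustration distilled from the commands traced in this run, not the test script itself, and the tgt_rpc/host_rpc wrappers plus the target-socket default are assumptions (the excerpt only names the host socket explicitly).

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45
  # Hypothetical wrappers standing in for rpc_cmd/hostrpc in the trace; the target
  # app's RPC socket is assumed to be the rpc.py default.
  tgt_rpc()  { "$RPC" "$@"; }
  host_rpc() { "$RPC" -s /var/tmp/host.sock "$@"; }

  # 1. Limit the host initiator to the digest/dhgroup under test.
  host_rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # 2. Allow the host NQN on the subsystem with a DH-HMAC-CHAP key (and controller key).
  tgt_rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Attach a controller; the DH-HMAC-CHAP exchange happens during this connect.
  host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 4. Check the attach took, then tear down for the next key/dhgroup combination.
  host_rpc bdev_nvme_get_controllers | jq -r '.[].name'
  host_rpc bdev_nvme_detach_controller nvme0
  tgt_rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"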
00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.176 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.434 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:38.693 { 00:27:38.693 "cntlid": 125, 00:27:38.693 "qid": 0, 00:27:38.693 "state": "enabled", 00:27:38.693 "thread": "nvmf_tgt_poll_group_000", 00:27:38.693 "listen_address": { 00:27:38.693 "trtype": "TCP", 00:27:38.693 "adrfam": "IPv4", 00:27:38.693 "traddr": "10.0.0.2", 00:27:38.693 "trsvcid": "4420" 00:27:38.693 }, 00:27:38.693 "peer_address": { 00:27:38.693 "trtype": "TCP", 00:27:38.693 "adrfam": "IPv4", 00:27:38.693 "traddr": "10.0.0.1", 00:27:38.693 "trsvcid": "46388" 00:27:38.693 }, 00:27:38.693 "auth": { 00:27:38.693 "state": "completed", 00:27:38.693 "digest": "sha512", 00:27:38.693 "dhgroup": "ffdhe4096" 00:27:38.693 } 00:27:38.693 } 00:27:38.693 ]' 00:27:38.693 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:38.952 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:39.211 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:39.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:39.808 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:40.067 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:40.326 00:27:40.326 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:40.326 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:40.326 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:40.586 { 00:27:40.586 "cntlid": 127, 00:27:40.586 "qid": 0, 00:27:40.586 "state": "enabled", 00:27:40.586 "thread": "nvmf_tgt_poll_group_000", 00:27:40.586 "listen_address": { 00:27:40.586 "trtype": "TCP", 00:27:40.586 "adrfam": "IPv4", 00:27:40.586 "traddr": "10.0.0.2", 00:27:40.586 "trsvcid": "4420" 00:27:40.586 }, 00:27:40.586 "peer_address": { 
00:27:40.586 "trtype": "TCP", 00:27:40.586 "adrfam": "IPv4", 00:27:40.586 "traddr": "10.0.0.1", 00:27:40.586 "trsvcid": "46400" 00:27:40.586 }, 00:27:40.586 "auth": { 00:27:40.586 "state": "completed", 00:27:40.586 "digest": "sha512", 00:27:40.586 "dhgroup": "ffdhe4096" 00:27:40.586 } 00:27:40.586 } 00:27:40.586 ]' 00:27:40.586 17:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:40.586 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:40.844 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:41.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.412 17:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.671 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.929 00:27:41.929 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:41.929 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:41.929 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:42.187 { 00:27:42.187 "cntlid": 129, 00:27:42.187 "qid": 0, 00:27:42.187 "state": "enabled", 00:27:42.187 "thread": "nvmf_tgt_poll_group_000", 00:27:42.187 "listen_address": { 00:27:42.187 "trtype": "TCP", 00:27:42.187 "adrfam": "IPv4", 00:27:42.187 "traddr": "10.0.0.2", 00:27:42.187 "trsvcid": "4420" 00:27:42.187 }, 00:27:42.187 "peer_address": { 00:27:42.187 "trtype": "TCP", 00:27:42.187 "adrfam": "IPv4", 00:27:42.187 "traddr": "10.0.0.1", 00:27:42.187 "trsvcid": "46430" 00:27:42.187 }, 00:27:42.187 "auth": { 00:27:42.187 "state": "completed", 00:27:42.187 "digest": "sha512", 00:27:42.187 "dhgroup": "ffdhe6144" 00:27:42.187 } 00:27:42.187 } 00:27:42.187 ]' 00:27:42.187 17:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:42.187 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:42.188 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:42.188 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:42.188 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:42.445 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:42.445 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:42.445 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:42.445 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:43.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.377 17:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.976 00:27:43.976 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:43.976 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:43.976 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:44.234 { 00:27:44.234 "cntlid": 131, 00:27:44.234 "qid": 0, 00:27:44.234 "state": "enabled", 00:27:44.234 "thread": "nvmf_tgt_poll_group_000", 00:27:44.234 "listen_address": { 00:27:44.234 "trtype": "TCP", 00:27:44.234 "adrfam": "IPv4", 00:27:44.234 "traddr": "10.0.0.2", 00:27:44.234 "trsvcid": "4420" 00:27:44.234 }, 00:27:44.234 "peer_address": { 00:27:44.234 "trtype": "TCP", 00:27:44.234 "adrfam": "IPv4", 00:27:44.234 "traddr": "10.0.0.1", 00:27:44.234 "trsvcid": "46464" 00:27:44.234 }, 00:27:44.234 "auth": { 00:27:44.234 "state": "completed", 00:27:44.234 "digest": "sha512", 00:27:44.234 "dhgroup": "ffdhe6144" 00:27:44.234 } 00:27:44.234 } 00:27:44.234 ]' 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:44.234 17:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:44.234 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:44.493 17:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:45.060 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:45.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.318 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.576 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
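Between the attach and the detach, each iteration also asserts what was actually negotiated: the controller name is read back from the host app, and the subsystem's queue pairs are pulled from the target, whose auth block must report the expected digest, dhgroup and a completed state. Below is a minimal sketch of that check, reusing the jq filters from the trace (and the hypothetical tgt_rpc/host_rpc wrappers from the earlier sketch); the expected values shown are the ones for this sha512/ffdhe6144 pass.

  name=$(host_rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # The qpair's auth object records what the DH-HMAC-CHAP exchange settled on.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]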
00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.577 17:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.835 00:27:45.835 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:45.835 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:45.835 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:46.402 { 00:27:46.402 "cntlid": 133, 00:27:46.402 "qid": 0, 00:27:46.402 "state": "enabled", 00:27:46.402 "thread": "nvmf_tgt_poll_group_000", 00:27:46.402 "listen_address": { 00:27:46.402 "trtype": "TCP", 00:27:46.402 "adrfam": "IPv4", 00:27:46.402 "traddr": "10.0.0.2", 00:27:46.402 "trsvcid": "4420" 00:27:46.402 }, 00:27:46.402 "peer_address": { 00:27:46.402 "trtype": "TCP", 00:27:46.402 "adrfam": "IPv4", 00:27:46.402 "traddr": "10.0.0.1", 00:27:46.402 "trsvcid": "46482" 00:27:46.402 }, 00:27:46.402 "auth": { 00:27:46.402 "state": "completed", 00:27:46.402 "digest": "sha512", 00:27:46.402 "dhgroup": "ffdhe6144" 00:27:46.402 } 00:27:46.402 } 00:27:46.402 ]' 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:46.402 17:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:46.661 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:47.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.228 17:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:47.794 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:48.052 00:27:48.052 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:48.052 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:48.052 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:48.310 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.310 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:48.310 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.310 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.310 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:48.311 { 00:27:48.311 "cntlid": 135, 00:27:48.311 "qid": 0, 00:27:48.311 "state": "enabled", 00:27:48.311 "thread": "nvmf_tgt_poll_group_000", 00:27:48.311 "listen_address": { 00:27:48.311 "trtype": "TCP", 00:27:48.311 "adrfam": "IPv4", 00:27:48.311 "traddr": "10.0.0.2", 00:27:48.311 "trsvcid": "4420" 00:27:48.311 }, 00:27:48.311 "peer_address": { 00:27:48.311 "trtype": "TCP", 00:27:48.311 "adrfam": "IPv4", 00:27:48.311 "traddr": "10.0.0.1", 00:27:48.311 "trsvcid": "46422" 00:27:48.311 }, 00:27:48.311 "auth": { 00:27:48.311 "state": "completed", 00:27:48.311 "digest": "sha512", 00:27:48.311 "dhgroup": "ffdhe6144" 00:27:48.311 } 00:27:48.311 } 00:27:48.311 ]' 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:48.311 17:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:48.878 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:49.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.445 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.703 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.638 00:27:50.638 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:50.638 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:50.638 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:50.638 { 00:27:50.638 "cntlid": 137, 00:27:50.638 "qid": 0, 00:27:50.638 "state": "enabled", 00:27:50.638 "thread": "nvmf_tgt_poll_group_000", 00:27:50.638 "listen_address": { 00:27:50.638 "trtype": "TCP", 00:27:50.638 "adrfam": "IPv4", 00:27:50.638 "traddr": "10.0.0.2", 00:27:50.638 "trsvcid": "4420" 00:27:50.638 }, 00:27:50.638 "peer_address": { 00:27:50.638 "trtype": "TCP", 00:27:50.638 "adrfam": "IPv4", 00:27:50.638 "traddr": "10.0.0.1", 00:27:50.638 "trsvcid": "46456" 00:27:50.638 }, 00:27:50.638 "auth": { 00:27:50.638 "state": "completed", 00:27:50.638 "digest": "sha512", 00:27:50.638 "dhgroup": "ffdhe8192" 00:27:50.638 } 00:27:50.638 } 00:27:50.638 ]' 00:27:50.638 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:50.897 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:51.156 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:27:52.090 17:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:52.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.090 17:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.028 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
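Each iteration additionally exercises the kernel initiator: after the SPDK-to-SPDK attach/detach, nvme-cli connects to the same subsystem with the DH-HMAC-CHAP secrets passed in DHHC-1 form and is then disconnected (target/auth.sh@52 and @55 in the trace). Roughly, with the secret strings replaced by placeholders (the trace carries the full base64 values):

  HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
  # Placeholder secrets; real runs pass complete DHHC-1:xx:<base64>: strings.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0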
00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:53.028 { 00:27:53.028 "cntlid": 139, 00:27:53.028 "qid": 0, 00:27:53.028 "state": "enabled", 00:27:53.028 "thread": "nvmf_tgt_poll_group_000", 00:27:53.028 "listen_address": { 00:27:53.028 "trtype": "TCP", 00:27:53.028 "adrfam": "IPv4", 00:27:53.028 "traddr": "10.0.0.2", 00:27:53.028 "trsvcid": "4420" 00:27:53.028 }, 00:27:53.028 "peer_address": { 00:27:53.028 "trtype": "TCP", 00:27:53.028 "adrfam": "IPv4", 00:27:53.028 "traddr": "10.0.0.1", 00:27:53.028 "trsvcid": "46480" 00:27:53.028 }, 00:27:53.028 "auth": { 00:27:53.028 "state": "completed", 00:27:53.028 "digest": "sha512", 00:27:53.028 "dhgroup": "ffdhe8192" 00:27:53.028 } 00:27:53.028 } 00:27:53.028 ]' 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:53.028 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:53.287 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:53.287 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:53.287 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:53.287 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:53.287 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:53.545 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:01:ZGMyNmIwOWMzZWVhNDQ1NzdiMDBkOTFmOWYwMzhiOGJumg+R: --dhchap-ctrl-secret DHHC-1:02:NWRmNWZkMzcyOTU1YmMwZjk2NmU1NDEzMjk5Y2Q5ZjQ1OTMyZmQzMGYzN2YzYzk0kdxQvQ==: 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:54.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:54.111 17:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:54.111 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.678 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.244 00:27:55.244 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:55.244 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:55.244 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:55.502 { 00:27:55.502 "cntlid": 141, 00:27:55.502 "qid": 0, 00:27:55.502 "state": "enabled", 00:27:55.502 "thread": "nvmf_tgt_poll_group_000", 00:27:55.502 "listen_address": { 00:27:55.502 "trtype": "TCP", 00:27:55.502 "adrfam": "IPv4", 00:27:55.502 "traddr": "10.0.0.2", 00:27:55.502 "trsvcid": "4420" 00:27:55.502 }, 00:27:55.502 "peer_address": { 00:27:55.502 "trtype": "TCP", 00:27:55.502 "adrfam": "IPv4", 00:27:55.502 "traddr": "10.0.0.1", 00:27:55.502 "trsvcid": "46502" 00:27:55.502 }, 00:27:55.502 "auth": { 00:27:55.502 "state": "completed", 00:27:55.502 "digest": "sha512", 00:27:55.502 "dhgroup": "ffdhe8192" 00:27:55.502 } 00:27:55.502 } 00:27:55.502 ]' 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:55.502 17:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:55.502 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:55.502 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:55.502 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:55.502 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:55.503 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:55.760 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:02:OWEzNDcwNDhhZjRlMjYxNWI2MzIzMzBhMmYzZGNlNmMxMmFjNDVmMDAzYmVlNDM38gNs3A==: --dhchap-ctrl-secret DHHC-1:01:OTk5OTdjOTM3ODBkNzQ4ZDE4ZmIyNzQ5NjNkOTMyYjPAEg6u: 00:27:56.716 17:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:56.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:56.716 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:57.283 00:27:57.283 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:57.283 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:57.283 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:57.541 { 00:27:57.541 "cntlid": 
143, 00:27:57.541 "qid": 0, 00:27:57.541 "state": "enabled", 00:27:57.541 "thread": "nvmf_tgt_poll_group_000", 00:27:57.541 "listen_address": { 00:27:57.541 "trtype": "TCP", 00:27:57.541 "adrfam": "IPv4", 00:27:57.541 "traddr": "10.0.0.2", 00:27:57.541 "trsvcid": "4420" 00:27:57.541 }, 00:27:57.541 "peer_address": { 00:27:57.541 "trtype": "TCP", 00:27:57.541 "adrfam": "IPv4", 00:27:57.541 "traddr": "10.0.0.1", 00:27:57.541 "trsvcid": "46518" 00:27:57.541 }, 00:27:57.541 "auth": { 00:27:57.541 "state": "completed", 00:27:57.541 "digest": "sha512", 00:27:57.541 "dhgroup": "ffdhe8192" 00:27:57.541 } 00:27:57.541 } 00:27:57.541 ]' 00:27:57.541 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:57.799 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:58.058 17:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:58.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:58.624 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.882 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.448 00:27:59.448 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:59.448 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:59.448 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:59.706 { 00:27:59.706 
"cntlid": 145, 00:27:59.706 "qid": 0, 00:27:59.706 "state": "enabled", 00:27:59.706 "thread": "nvmf_tgt_poll_group_000", 00:27:59.706 "listen_address": { 00:27:59.706 "trtype": "TCP", 00:27:59.706 "adrfam": "IPv4", 00:27:59.706 "traddr": "10.0.0.2", 00:27:59.706 "trsvcid": "4420" 00:27:59.706 }, 00:27:59.706 "peer_address": { 00:27:59.706 "trtype": "TCP", 00:27:59.706 "adrfam": "IPv4", 00:27:59.706 "traddr": "10.0.0.1", 00:27:59.706 "trsvcid": "45028" 00:27:59.706 }, 00:27:59.706 "auth": { 00:27:59.706 "state": "completed", 00:27:59.706 "digest": "sha512", 00:27:59.706 "dhgroup": "ffdhe8192" 00:27:59.706 } 00:27:59.706 } 00:27:59.706 ]' 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:59.706 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:59.963 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:00:Y2VmNjM3YmIyNGVjYmVkMzQwNDJjZGJhMTI0OTBlNDIyZDNlOTQ2MzlmNmU2ZGVh1z4waw==: --dhchap-ctrl-secret DHHC-1:03:NDdiNTA1OTNkMjQ1MGE3YWYwNDk3ZTUwYjJlNDQwZjY4MjRhMTJjOTNiMDRjZDliYWFkYmM2YmUzODdkOTVjNIByrwI=: 00:28:00.932 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:00.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:00.932 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:00.932 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.932 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.932 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.932 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:00.933 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:01.498 request: 00:28:01.498 { 00:28:01.498 "name": "nvme0", 00:28:01.498 "trtype": "tcp", 00:28:01.498 "traddr": "10.0.0.2", 00:28:01.498 "adrfam": "ipv4", 00:28:01.498 "trsvcid": "4420", 00:28:01.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:01.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45", 00:28:01.498 "prchk_reftag": false, 00:28:01.498 "prchk_guard": false, 00:28:01.498 "hdgst": false, 00:28:01.498 "ddgst": false, 00:28:01.498 "dhchap_key": "key2", 00:28:01.498 "method": "bdev_nvme_attach_controller", 00:28:01.498 "req_id": 1 00:28:01.498 } 00:28:01.498 Got JSON-RPC error response 00:28:01.498 response: 00:28:01.498 { 00:28:01.498 "code": -5, 00:28:01.498 "message": "Input/output error" 00:28:01.498 } 00:28:01.498 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:01.498 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:01.498 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.499 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.066 request: 00:28:02.066 { 00:28:02.066 "name": "nvme0", 00:28:02.066 "trtype": "tcp", 00:28:02.066 "traddr": "10.0.0.2", 00:28:02.066 "adrfam": "ipv4", 00:28:02.066 "trsvcid": "4420", 00:28:02.066 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:02.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45", 00:28:02.066 "prchk_reftag": false, 00:28:02.066 "prchk_guard": false, 00:28:02.066 "hdgst": false, 00:28:02.066 "ddgst": false, 00:28:02.066 "dhchap_key": "key1", 00:28:02.066 "dhchap_ctrlr_key": "ckey2", 00:28:02.066 "method": "bdev_nvme_attach_controller", 00:28:02.066 "req_id": 1 00:28:02.066 } 00:28:02.066 Got JSON-RPC error response 00:28:02.066 response: 00:28:02.066 { 00:28:02.066 "code": -5, 00:28:02.066 "message": "Input/output error" 
00:28:02.066 } 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key1 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.066 17:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.632 request: 00:28:02.632 { 00:28:02.632 "name": "nvme0", 00:28:02.632 "trtype": "tcp", 00:28:02.632 "traddr": "10.0.0.2", 00:28:02.632 "adrfam": "ipv4", 00:28:02.632 "trsvcid": "4420", 00:28:02.632 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:02.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45", 00:28:02.632 "prchk_reftag": false, 00:28:02.632 "prchk_guard": false, 00:28:02.632 "hdgst": false, 00:28:02.632 "ddgst": false, 00:28:02.632 "dhchap_key": "key1", 00:28:02.632 "dhchap_ctrlr_key": "ckey1", 00:28:02.632 "method": "bdev_nvme_attach_controller", 00:28:02.632 "req_id": 1 00:28:02.632 } 00:28:02.632 Got JSON-RPC error response 00:28:02.632 response: 00:28:02.632 { 00:28:02.632 "code": -5, 00:28:02.632 "message": "Input/output error" 00:28:02.632 } 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 73833 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 73833 ']' 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 73833 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73833 00:28:02.632 killing process with pid 73833 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73833' 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 73833 00:28:02.632 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 73833 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=76844 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 76844 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 76844 ']' 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.007 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.382 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:28:05.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 76844 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 76844 ']' 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
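# Sketch of the waitforlisten step recorded above: the harness blocks until the relaunched nvmf_tgt
# (pid 76844, started with --wait-for-rpc -L nvmf_auth) answers on /var/tmp/spdk.sock. A standalone
# equivalent is shown below; the rpc_get_methods probe and the 100-iteration retry budget are
# assumptions modeled on the max_retries=100 value visible in the log, not a copy of the helper itself.
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break    # target is up and serving RPCs; the auth test can proceed
    fi
    sleep 0.5    # brief back-off between probes
done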
00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.383 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:05.950 17:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:06.515 00:28:06.515 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:06.515 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:06.515 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
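# Sketch of the check that follows: the qpairs JSON printed below is verified field by field with jq,
# mirroring the connect_authenticate flow in target/auth.sh. The RPC name, subsystem NQN and expected
# sha512/ffdhe8192/completed values are taken from the surrounding log; the default /var/tmp/spdk.sock
# target socket set up earlier is assumed.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP handshake finished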
00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:06.774 { 00:28:06.774 "cntlid": 1, 00:28:06.774 "qid": 0, 00:28:06.774 "state": "enabled", 00:28:06.774 "thread": "nvmf_tgt_poll_group_000", 00:28:06.774 "listen_address": { 00:28:06.774 "trtype": "TCP", 00:28:06.774 "adrfam": "IPv4", 00:28:06.774 "traddr": "10.0.0.2", 00:28:06.774 "trsvcid": "4420" 00:28:06.774 }, 00:28:06.774 "peer_address": { 00:28:06.774 "trtype": "TCP", 00:28:06.774 "adrfam": "IPv4", 00:28:06.774 "traddr": "10.0.0.1", 00:28:06.774 "trsvcid": "45084" 00:28:06.774 }, 00:28:06.774 "auth": { 00:28:06.774 "state": "completed", 00:28:06.774 "digest": "sha512", 00:28:06.774 "dhgroup": "ffdhe8192" 00:28:06.774 } 00:28:06.774 } 00:28:06.774 ]' 00:28:06.774 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:07.032 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:07.291 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-secret DHHC-1:03:NTJlY2FlNjdiNDgxOWUxZmFjYTU1NmFkYjdkN2FmZTFmYTFkNThmZDFhN2U4MDQyMWFmY2JlODdkZTU0MGU3NXmjHOs=: 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:07.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --dhchap-key key3 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:28:07.922 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.180 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.439 request: 00:28:08.439 { 00:28:08.439 "name": "nvme0", 00:28:08.439 "trtype": "tcp", 00:28:08.439 "traddr": "10.0.0.2", 00:28:08.439 "adrfam": "ipv4", 00:28:08.439 "trsvcid": "4420", 00:28:08.439 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:08.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45", 00:28:08.439 "prchk_reftag": false, 00:28:08.439 "prchk_guard": false, 00:28:08.439 "hdgst": false, 00:28:08.439 "ddgst": false, 00:28:08.439 "dhchap_key": "key3", 00:28:08.439 "method": "bdev_nvme_attach_controller", 00:28:08.439 "req_id": 1 00:28:08.439 } 00:28:08.439 Got JSON-RPC error response 00:28:08.439 response: 00:28:08.439 { 00:28:08.439 "code": -5, 00:28:08.439 "message": "Input/output error" 00:28:08.439 } 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 
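# Sketch of the negative case that just completed: with the host restricted to sha256 digests
# (bdev_nvme_set_options above), the attach with key3 is expected to be rejected, and the log records a
# JSON-RPC -5 "Input/output error". Stripped of the NOT/valid_exec_arg helpers, the same assertion
# (arguments copied verbatim from the log) reduces to:
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "attach unexpectedly succeeded despite the digest mismatch" >&2
    exit 1      # the test treats success here as a failure
fi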
00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:28:08.439 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.699 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.957 request: 00:28:08.957 { 00:28:08.957 "name": "nvme0", 00:28:08.957 "trtype": "tcp", 00:28:08.957 "traddr": "10.0.0.2", 00:28:08.957 "adrfam": "ipv4", 00:28:08.957 "trsvcid": "4420", 00:28:08.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:08.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45", 00:28:08.957 "prchk_reftag": false, 00:28:08.957 "prchk_guard": false, 00:28:08.957 "hdgst": false, 00:28:08.957 "ddgst": false, 00:28:08.957 "dhchap_key": "key3", 00:28:08.957 "method": "bdev_nvme_attach_controller", 00:28:08.957 "req_id": 1 00:28:08.957 } 00:28:08.957 Got JSON-RPC error response 
00:28:08.957 response: 00:28:08.957 { 00:28:08.957 "code": -5, 00:28:08.957 "message": "Input/output error" 00:28:08.957 } 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:08.957 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:08.958 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:28:09.216 17:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:28:09.216 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:28:09.475 request: 00:28:09.475 { 00:28:09.475 "name": "nvme0", 00:28:09.475 "trtype": "tcp", 00:28:09.475 "traddr": "10.0.0.2", 00:28:09.475 "adrfam": "ipv4", 00:28:09.475 "trsvcid": "4420", 00:28:09.475 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:09.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45", 00:28:09.475 "prchk_reftag": false, 00:28:09.475 "prchk_guard": false, 00:28:09.475 "hdgst": false, 00:28:09.475 "ddgst": false, 00:28:09.475 "dhchap_key": "key0", 00:28:09.475 "dhchap_ctrlr_key": "key1", 00:28:09.475 "method": "bdev_nvme_attach_controller", 00:28:09.475 "req_id": 1 00:28:09.475 } 00:28:09.475 Got JSON-RPC error response 00:28:09.475 response: 00:28:09.475 { 00:28:09.475 "code": -5, 00:28:09.475 "message": "Input/output error" 00:28:09.475 } 00:28:09.475 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:09.475 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.475 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.475 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.475 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:09.475 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:09.733 00:28:09.733 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:28:09.733 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:09.733 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:28:09.991 17:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.991 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:09.991 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 73865 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 73865 ']' 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 73865 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73865 00:28:10.249 killing process with pid 73865 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73865' 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 73865 00:28:10.249 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 73865 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.552 rmmod nvme_tcp 00:28:13.552 rmmod nvme_fabrics 00:28:13.552 rmmod nvme_keyring 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 76844 ']' 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 76844 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 76844 ']' 00:28:13.552 
17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 76844 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76844 00:28:13.552 killing process with pid 76844 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76844' 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 76844 00:28:13.552 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 76844 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.498 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.u0a /tmp/spdk.key-sha256.HEb /tmp/spdk.key-sha384.ZDy /tmp/spdk.key-sha512.lTH /tmp/spdk.key-sha512.XV1 /tmp/spdk.key-sha384.cKj /tmp/spdk.key-sha256.siE '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:28:14.758 00:28:14.758 real 2m55.602s 00:28:14.758 user 6m49.418s 00:28:14.758 sys 0m31.941s 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.758 ************************************ 00:28:14.758 END TEST nvmf_auth_target 00:28:14.758 ************************************ 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:14.758 ************************************ 00:28:14.758 START TEST nvmf_bdevio_no_huge 00:28:14.758 ************************************ 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:28:14.758 * Looking for test storage... 00:28:14.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:14.758 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:14.759 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:15.018 Cannot find device "nvmf_tgt_br" 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:15.018 Cannot find device "nvmf_tgt_br2" 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:15.018 Cannot find device "nvmf_tgt_br" 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:15.018 Cannot find device "nvmf_tgt_br2" 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:15.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:15.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link 
set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:15.018 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:15.019 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:15.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:28:15.278 00:28:15.278 --- 10.0.0.2 ping statistics --- 00:28:15.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.278 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:15.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:15.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:28:15.278 00:28:15.278 --- 10.0.0.3 ping statistics --- 00:28:15.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.278 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:15.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:28:15.278 00:28:15.278 --- 10.0.0.1 ping statistics --- 00:28:15.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.278 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=77212 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 77212 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 77212 ']' 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.278 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:15.278 [2024-07-22 17:06:16.877540] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
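For reference, the nvmf_veth_init sequence traced above (common.sh@141-@207) reduces to a small topology: one veth pair for the initiator side, two veth pairs whose target ends live inside the nvmf_tgt_ns_spdk namespace, and a host-side bridge tying the peer ends together; the ping triplet then proves 10.0.0.1 <-> 10.0.0.2/10.0.0.3 connectivity in both directions. A condensed sketch of the same commands, with the initial "Cannot find device" cleanup pass and error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP port on the initiator-side interface
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                   # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                          # namespace -> host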
00:28:15.278 [2024-07-22 17:06:16.877935] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:28:15.536 [2024-07-22 17:06:17.104392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.103 [2024-07-22 17:06:17.498496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.103 [2024-07-22 17:06:17.498558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.103 [2024-07-22 17:06:17.498576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.103 [2024-07-22 17:06:17.498589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.103 [2024-07-22 17:06:17.498604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.103 [2024-07-22 17:06:17.498870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.103 [2024-07-22 17:06:17.499400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:16.103 [2024-07-22 17:06:17.499646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.103 [2024-07-22 17:06:17.499676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:16.361 [2024-07-22 17:06:17.728456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:16.361 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.361 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:28:16.361 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.361 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:16.361 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.619 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.619 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 [2024-07-22 17:06:17.987787] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 Malloc0 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 17:06:18 
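The rpc_cmd calls in bdevio.sh@18-@22 (the transport create whose "TCP Transport Init" notice appears just above, plus the bdev, subsystem, namespace and listener calls that follow below) amount to roughly this rpc.py sequence against the default /var/tmp/spdk.sock socket. A sketch for orientation only; rpc_cmd may route these through an RPC daemon rather than invoking rpc.py once per call.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                  # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # bdevio.sh@18 -> "TCP Transport Init" above
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # bdevio.sh@19: 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # bdevio.sh@20
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # bdevio.sh@21
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # bdevio.sh@22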
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 [2024-07-22 17:06:18.111218] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:16.619 { 00:28:16.619 "params": { 00:28:16.619 "name": "Nvme$subsystem", 00:28:16.619 "trtype": "$TEST_TRANSPORT", 00:28:16.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.619 "adrfam": "ipv4", 00:28:16.619 "trsvcid": "$NVMF_PORT", 00:28:16.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.619 "hdgst": ${hdgst:-false}, 00:28:16.619 "ddgst": ${ddgst:-false} 00:28:16.619 }, 00:28:16.619 "method": "bdev_nvme_attach_controller" 00:28:16.619 } 00:28:16.619 EOF 00:28:16.619 )") 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:28:16.619 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:16.619 "params": { 00:28:16.619 "name": "Nvme1", 00:28:16.619 "trtype": "tcp", 00:28:16.619 "traddr": "10.0.0.2", 00:28:16.619 "adrfam": "ipv4", 00:28:16.619 "trsvcid": "4420", 00:28:16.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.619 "hdgst": false, 00:28:16.619 "ddgst": false 00:28:16.619 }, 00:28:16.619 "method": "bdev_nvme_attach_controller" 00:28:16.619 }' 00:28:16.619 [2024-07-22 17:06:18.226458] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:16.619 [2024-07-22 17:06:18.226857] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77254 ] 00:28:16.876 [2024-07-22 17:06:18.436478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:17.443 [2024-07-22 17:06:18.766504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.443 [2024-07-22 17:06:18.766596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.443 [2024-07-22 17:06:18.766626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.443 [2024-07-22 17:06:19.005013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:17.701 I/O targets: 00:28:17.701 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:28:17.701 00:28:17.701 00:28:17.701 CUnit - A unit testing framework for C - Version 2.1-3 00:28:17.701 http://cunit.sourceforge.net/ 00:28:17.701 00:28:17.701 00:28:17.701 Suite: bdevio tests on: Nvme1n1 00:28:17.701 Test: blockdev write read block ...passed 00:28:17.701 Test: blockdev write zeroes read block ...passed 00:28:17.701 Test: blockdev write zeroes read no split ...passed 00:28:17.960 Test: blockdev write zeroes read split ...passed 00:28:17.960 Test: blockdev write zeroes read split partial ...passed 00:28:17.960 Test: blockdev reset ...[2024-07-22 17:06:19.360758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:17.960 [2024-07-22 17:06:19.360922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:28:17.960 passed 00:28:17.960 Test: blockdev write read 8 blocks ...[2024-07-22 17:06:19.379496] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:17.960 passed 00:28:17.960 Test: blockdev write read size > 128k ...passed 00:28:17.960 Test: blockdev write read invalid size ...passed 00:28:17.960 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:17.960 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:17.960 Test: blockdev write read max offset ...passed 00:28:17.960 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:17.960 Test: blockdev writev readv 8 blocks ...passed 00:28:17.960 Test: blockdev writev readv 30 x 1block ...passed 00:28:17.960 Test: blockdev writev readv block ...passed 00:28:17.960 Test: blockdev writev readv size > 128k ...passed 00:28:17.960 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:17.960 Test: blockdev comparev and writev ...[2024-07-22 17:06:19.390654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.390721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.390748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.390766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.391108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.391136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.391157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.391174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.391499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.391527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.391548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.391567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.392009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.392045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.392067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:17.960 [2024-07-22 17:06:19.392084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:28:17.960 passed 00:28:17.960 Test: blockdev nvme passthru rw ...passed 00:28:17.960 Test: blockdev nvme passthru vendor specific ...passed 00:28:17.960 Test: blockdev nvme admin passthru ...[2024-07-22 17:06:19.392831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.960 [2024-07-22 17:06:19.392875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.393005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.960 [2024-07-22 17:06:19.393030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.393146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.960 [2024-07-22 17:06:19.393172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:17.960 [2024-07-22 17:06:19.393294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.960 [2024-07-22 17:06:19.393319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:17.960 passed 00:28:17.960 Test: blockdev copy ...passed 00:28:17.960 00:28:17.960 Run Summary: Type Total Ran Passed Failed Inactive 00:28:17.960 suites 1 1 n/a 0 0 00:28:17.960 tests 23 23 23 0 0 00:28:17.960 asserts 152 152 152 0 n/a 00:28:17.960 00:28:17.960 Elapsed time = 0.295 seconds 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:18.896 rmmod nvme_tcp 00:28:18.896 rmmod nvme_fabrics 00:28:18.896 rmmod nvme_keyring 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:28:18.896 17:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 77212 ']' 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 77212 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 77212 ']' 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 77212 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77212 00:28:18.896 killing process with pid 77212 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77212' 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 77212 00:28:18.896 17:06:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 77212 00:28:19.829 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.830 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:20.088 00:28:20.088 real 0m5.235s 00:28:20.088 user 0m18.565s 00:28:20.088 sys 0m1.773s 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:20.088 ************************************ 00:28:20.088 END TEST nvmf_bdevio_no_huge 00:28:20.088 ************************************ 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.088 ************************************ 00:28:20.088 START TEST nvmf_tls 00:28:20.088 ************************************ 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:28:20.088 * Looking for test storage... 00:28:20.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
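As in the bdevio run above, sourcing nvmf/common.sh rebuilds the NVMF_APP command line via build_nvmf_app_args (common.sh@25-@35 in the trace). Read from the trace, the effective construction for this tls run is roughly the sketch below; note that NO_HUGE stays empty here, which is why the tls target later starts without --no-huge.

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)      # base app (path taken from the launch line further down)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                     # common.sh@29
    NVMF_APP+=("${NO_HUGE[@]}")                                     # common.sh@31 -- empty for tls.sh
    # after nvmf_veth_init (common.sh@209) the namespace wrapper is prepended:
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    # net result, as seen at nvmfappstart below:
    #   ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc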
00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:20.088 Cannot find device 
"nvmf_tgt_br" 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:20.088 Cannot find device "nvmf_tgt_br2" 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:20.088 Cannot find device "nvmf_tgt_br" 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:20.088 Cannot find device "nvmf_tgt_br2" 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:28:20.088 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:20.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:20.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:20.347 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:20.606 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:20.606 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:20.606 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:20.606 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:20.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:28:20.606 00:28:20.606 --- 10.0.0.2 ping statistics --- 00:28:20.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.606 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:28:20.606 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:20.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:20.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:28:20.606 00:28:20.606 --- 10.0.0.3 ping statistics --- 00:28:20.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.606 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:28:20.606 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:20.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:20.606 00:28:20.606 --- 10.0.0.1 ping statistics --- 00:28:20.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.606 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:20.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77470 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77470 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77470 ']' 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.606 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:20.606 [2024-07-22 17:06:22.132969] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
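nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A minimal sketch of what the trace shows (the polling loop stands in for the real waitforlisten helper, which is more careful; rpc_get_methods is simply used here as a cheap RPC to probe with):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # --wait-for-rpc keeps initialization paused; framework_start_init is issued later in the
  # trace, once the ssl socket options have been applied
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done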
00:28:20.606 [2024-07-22 17:06:22.133092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.865 [2024-07-22 17:06:22.306940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.123 [2024-07-22 17:06:22.643744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.123 [2024-07-22 17:06:22.643829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.123 [2024-07-22 17:06:22.643845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.123 [2024-07-22 17:06:22.643859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.123 [2024-07-22 17:06:22.643870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.123 [2024-07-22 17:06:22.643925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:28:21.690 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:28:21.948 true 00:28:21.948 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:21.948 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:28:22.206 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:28:22.206 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:28:22.206 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:28:22.465 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:22.465 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:28:22.723 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:28:22.723 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:28:22.723 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:28:22.982 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:28:22.982 17:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:23.241 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:28:23.241 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:28:23.241 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:23.241 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:28:23.500 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:28:23.500 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:28:23.500 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:28:23.759 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:23.759 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:28:24.017 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:28:24.017 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:28:24.017 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:28:24.017 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:28:24.017 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:28:24.586 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:24.587 17:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.mVn1BPvNeJ 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dwVRGQ59jr 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.mVn1BPvNeJ 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dwVRGQ59jr 00:28:24.587 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:28:24.846 17:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:28:25.413 [2024-07-22 17:06:26.766322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:25.413 17:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.mVn1BPvNeJ 00:28:25.413 17:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mVn1BPvNeJ 00:28:25.413 17:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:25.671 [2024-07-22 17:06:27.105937] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.671 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:25.929 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:26.188 [2024-07-22 17:06:27.553969] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:26.188 [2024-07-22 17:06:27.554271] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:26.446 malloc0 00:28:26.446 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:26.446 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mVn1BPvNeJ 00:28:26.705 [2024-07-22 17:06:28.238348] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:26.705 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mVn1BPvNeJ 00:28:38.933 Initializing NVMe Controllers 00:28:38.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:38.933 Initialization complete. Launching workers. 00:28:38.933 ======================================================== 00:28:38.933 Latency(us) 00:28:38.933 Device Information : IOPS MiB/s Average min max 00:28:38.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9183.29 35.87 6970.58 2834.77 11576.16 00:28:38.933 ======================================================== 00:28:38.933 Total : 9183.29 35.87 6970.58 2834.77 11576.16 00:28:38.933 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mVn1BPvNeJ 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mVn1BPvNeJ' 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77709 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77709 /var/tmp/bdevperf.sock 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77709 ']' 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:38.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
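Before the bdevperf cases start, it is worth collecting what the target side has been configured with so far. Stripped of the xtrace noise, the sequence above amounts to the following rpc.py calls (rpc_py is /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the PSK file is the 0600-mode /tmp/tmp.mVn1BPvNeJ written earlier):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py sock_set_default_impl -i ssl                    # use the ssl socket implementation
  $rpc_py sock_impl_set_options -i ssl --tls-version 13   # TLS 1.3; the earlier --tls-version 7 and ktls toggles were only set/get round-trip checks
  $rpc_py framework_start_init                            # finish startup of the --wait-for-rpc target
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mVn1BPvNeJ

The spdk_nvme_perf run above already exercised this path end to end over TLS, driving 64-deep 4 KiB random read/write against the listener at 10.0.0.2:4420 with --psk-path pointing at the same key file.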
00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.933 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:38.933 [2024-07-22 17:06:38.736205] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:38.933 [2024-07-22 17:06:38.736443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77709 ] 00:28:38.933 [2024-07-22 17:06:38.922988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.933 [2024-07-22 17:06:39.248669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.933 [2024-07-22 17:06:39.523811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:38.933 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.933 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:28:38.934 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mVn1BPvNeJ 00:28:38.934 [2024-07-22 17:06:39.958293] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:38.934 [2024-07-22 17:06:39.958458] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:38.934 TLSTESTn1 00:28:38.934 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:28:38.934 Running I/O for 10 seconds... 
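On the initiator side, the passing case boils down to starting bdevperf on its own RPC socket, attaching a controller with the same PSK, and driving I/O through bdevperf.py; condensed from the commands recorded in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mVn1BPvNeJ
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests

The NOT run_bdevperf cases further down repeat exactly this flow with a mismatched key (/tmp/tmp.dwVRGQ59jr), an unregistered host NQN (host2), an unknown subsystem NQN (cnode2), and no PSK at all; in each of them the attach is expected to fail, which is why the -5 Input/output error responses from bdev_nvme_attach_controller below count as passes.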
00:28:48.918 00:28:48.918 Latency(us) 00:28:48.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.918 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:48.918 Verification LBA range: start 0x0 length 0x2000 00:28:48.918 TLSTESTn1 : 10.02 3595.49 14.04 0.00 0.00 35535.42 7177.75 28711.01 00:28:48.918 =================================================================================================================== 00:28:48.918 Total : 3595.49 14.04 0.00 0.00 35535.42 7177.75 28711.01 00:28:48.918 0 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 77709 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77709 ']' 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77709 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77709 00:28:48.918 killing process with pid 77709 00:28:48.918 Received shutdown signal, test time was about 10.000000 seconds 00:28:48.918 00:28:48.918 Latency(us) 00:28:48.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.918 =================================================================================================================== 00:28:48.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77709' 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77709 00:28:48.918 [2024-07-22 17:06:50.254115] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:48.918 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77709 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dwVRGQ59jr 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dwVRGQ59jr 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.303 17:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dwVRGQ59jr 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dwVRGQ59jr' 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77855 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77855 /var/tmp/bdevperf.sock 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77855 ']' 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.303 17:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:50.303 [2024-07-22 17:06:51.887523] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:50.303 [2024-07-22 17:06:51.887704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77855 ] 00:28:50.560 [2024-07-22 17:06:52.066201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.817 [2024-07-22 17:06:52.379186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.073 [2024-07-22 17:06:52.649258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:51.330 17:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.330 17:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:28:51.330 17:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dwVRGQ59jr 00:28:51.587 [2024-07-22 17:06:53.002056] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:51.587 [2024-07-22 17:06:53.002298] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:51.587 [2024-07-22 17:06:53.012720] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:51.587 [2024-07-22 17:06:53.013332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:28:51.587 [2024-07-22 17:06:53.014292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:51.587 [2024-07-22 17:06:53.015295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.587 [2024-07-22 17:06:53.015341] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:51.587 [2024-07-22 17:06:53.015372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:51.587 request: 00:28:51.587 { 00:28:51.587 "name": "TLSTEST", 00:28:51.587 "trtype": "tcp", 00:28:51.587 "traddr": "10.0.0.2", 00:28:51.587 "adrfam": "ipv4", 00:28:51.587 "trsvcid": "4420", 00:28:51.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.587 "prchk_reftag": false, 00:28:51.587 "prchk_guard": false, 00:28:51.587 "hdgst": false, 00:28:51.587 "ddgst": false, 00:28:51.587 "psk": "/tmp/tmp.dwVRGQ59jr", 00:28:51.587 "method": "bdev_nvme_attach_controller", 00:28:51.587 "req_id": 1 00:28:51.587 } 00:28:51.587 Got JSON-RPC error response 00:28:51.587 response: 00:28:51.587 { 00:28:51.587 "code": -5, 00:28:51.587 "message": "Input/output error" 00:28:51.587 } 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77855 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77855 ']' 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77855 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77855 00:28:51.587 killing process with pid 77855 00:28:51.587 Received shutdown signal, test time was about 10.000000 seconds 00:28:51.587 00:28:51.587 Latency(us) 00:28:51.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.587 =================================================================================================================== 00:28:51.587 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77855' 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77855 00:28:51.587 17:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77855 00:28:51.587 [2024-07-22 17:06:53.059175] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mVn1BPvNeJ 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mVn1BPvNeJ 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mVn1BPvNeJ 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mVn1BPvNeJ' 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77889 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77889 /var/tmp/bdevperf.sock 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77889 ']' 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:52.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.965 17:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:52.965 [2024-07-22 17:06:54.457046] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:52.965 [2024-07-22 17:06:54.457229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77889 ] 00:28:53.223 [2024-07-22 17:06:54.639157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.480 [2024-07-22 17:06:54.894048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.738 [2024-07-22 17:06:55.174607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:53.996 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.996 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:28:53.996 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.mVn1BPvNeJ 00:28:54.254 [2024-07-22 17:06:55.647014] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:54.254 [2024-07-22 17:06:55.647177] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:54.254 [2024-07-22 17:06:55.656532] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:28:54.254 [2024-07-22 17:06:55.656598] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:28:54.254 [2024-07-22 17:06:55.656683] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:54.254 [2024-07-22 17:06:55.657648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:28:54.254 [2024-07-22 17:06:55.658616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:54.254 [2024-07-22 17:06:55.659602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.254 [2024-07-22 17:06:55.659644] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:54.254 [2024-07-22 17:06:55.659663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:54.254 request: 00:28:54.254 { 00:28:54.254 "name": "TLSTEST", 00:28:54.254 "trtype": "tcp", 00:28:54.254 "traddr": "10.0.0.2", 00:28:54.254 "adrfam": "ipv4", 00:28:54.254 "trsvcid": "4420", 00:28:54.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.254 "prchk_reftag": false, 00:28:54.254 "prchk_guard": false, 00:28:54.254 "hdgst": false, 00:28:54.254 "ddgst": false, 00:28:54.254 "psk": "/tmp/tmp.mVn1BPvNeJ", 00:28:54.254 "method": "bdev_nvme_attach_controller", 00:28:54.254 "req_id": 1 00:28:54.254 } 00:28:54.254 Got JSON-RPC error response 00:28:54.254 response: 00:28:54.254 { 00:28:54.254 "code": -5, 00:28:54.254 "message": "Input/output error" 00:28:54.254 } 00:28:54.254 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77889 00:28:54.254 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77889 ']' 00:28:54.254 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77889 00:28:54.254 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77889 00:28:54.255 killing process with pid 77889 00:28:54.255 Received shutdown signal, test time was about 10.000000 seconds 00:28:54.255 00:28:54.255 Latency(us) 00:28:54.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.255 =================================================================================================================== 00:28:54.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77889' 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77889 00:28:54.255 [2024-07-22 17:06:55.710305] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:54.255 17:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77889 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mVn1BPvNeJ 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mVn1BPvNeJ 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.628 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:28:55.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mVn1BPvNeJ 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mVn1BPvNeJ' 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77934 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77934 /var/tmp/bdevperf.sock 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77934 ']' 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.888 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:55.888 [2024-07-22 17:06:57.363372] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:55.888 [2024-07-22 17:06:57.363773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77934 ] 00:28:56.146 [2024-07-22 17:06:57.532017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.418 [2024-07-22 17:06:57.798467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.676 [2024-07-22 17:06:58.074188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:56.676 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.676 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:28:56.676 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mVn1BPvNeJ 00:28:56.935 [2024-07-22 17:06:58.524109] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:56.935 [2024-07-22 17:06:58.524345] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:56.935 [2024-07-22 17:06:58.536738] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:28:56.935 [2024-07-22 17:06:58.536806] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:28:56.935 [2024-07-22 17:06:58.536879] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:56.935 [2024-07-22 17:06:58.537840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:28:56.935 [2024-07-22 17:06:58.538794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:56.935 [2024-07-22 17:06:58.539782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:56.935 [2024-07-22 17:06:58.539854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:56.935 [2024-07-22 17:06:58.539882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:28:56.935 request: 00:28:56.935 { 00:28:56.935 "name": "TLSTEST", 00:28:56.935 "trtype": "tcp", 00:28:56.935 "traddr": "10.0.0.2", 00:28:56.935 "adrfam": "ipv4", 00:28:56.935 "trsvcid": "4420", 00:28:56.935 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:56.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:56.935 "prchk_reftag": false, 00:28:56.935 "prchk_guard": false, 00:28:56.935 "hdgst": false, 00:28:56.935 "ddgst": false, 00:28:56.935 "psk": "/tmp/tmp.mVn1BPvNeJ", 00:28:56.935 "method": "bdev_nvme_attach_controller", 00:28:56.935 "req_id": 1 00:28:56.935 } 00:28:56.935 Got JSON-RPC error response 00:28:56.935 response: 00:28:56.935 { 00:28:56.935 "code": -5, 00:28:56.935 "message": "Input/output error" 00:28:56.935 } 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77934 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77934 ']' 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77934 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77934 00:28:57.195 killing process with pid 77934 00:28:57.195 Received shutdown signal, test time was about 10.000000 seconds 00:28:57.195 00:28:57.195 Latency(us) 00:28:57.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.195 =================================================================================================================== 00:28:57.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77934' 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77934 00:28:57.195 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77934 00:28:57.195 [2024-07-22 17:06:58.596593] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:28:58.572 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77974 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77974 /var/tmp/bdevperf.sock 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77974 ']' 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:58.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.572 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:58.572 [2024-07-22 17:07:00.151976] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:58.572 [2024-07-22 17:07:00.152217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77974 ] 00:28:58.850 [2024-07-22 17:07:00.350578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.124 [2024-07-22 17:07:00.627045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.382 [2024-07-22 17:07:00.911129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:59.640 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.640 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:28:59.640 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:28:59.899 [2024-07-22 17:07:01.364927] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:59.899 [2024-07-22 17:07:01.366571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:59.899 [2024-07-22 17:07:01.367549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.899 [2024-07-22 17:07:01.367601] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:59.899 [2024-07-22 17:07:01.367621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:59.899 request: 00:28:59.899 { 00:28:59.899 "name": "TLSTEST", 00:28:59.899 "trtype": "tcp", 00:28:59.899 "traddr": "10.0.0.2", 00:28:59.899 "adrfam": "ipv4", 00:28:59.899 "trsvcid": "4420", 00:28:59.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.899 "prchk_reftag": false, 00:28:59.899 "prchk_guard": false, 00:28:59.899 "hdgst": false, 00:28:59.899 "ddgst": false, 00:28:59.899 "method": "bdev_nvme_attach_controller", 00:28:59.899 "req_id": 1 00:28:59.899 } 00:28:59.899 Got JSON-RPC error response 00:28:59.899 response: 00:28:59.899 { 00:28:59.899 "code": -5, 00:28:59.899 "message": "Input/output error" 00:28:59.899 } 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77974 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77974 ']' 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77974 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77974 00:28:59.899 killing process with pid 77974 00:28:59.899 Received shutdown signal, test time was about 10.000000 seconds 00:28:59.899 00:28:59.899 Latency(us) 00:28:59.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.899 =================================================================================================================== 00:28:59.899 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77974' 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77974 00:28:59.899 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77974 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 77470 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77470 ']' 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77470 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 77470 00:29:01.799 killing process with pid 77470 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77470' 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77470 00:29:01.799 [2024-07-22 17:07:03.000913] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:01.799 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77470 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:29:03.175 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9FRkMzcmOC 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9FRkMzcmOC 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78041 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78041 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78041 ']' 00:29:03.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
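Note on the interchange key produced by format_interchange_psk in the trace above: the 48-character hex PSK is wrapped into the NVMe TLS PSK interchange format, a "NVMeTLSkey-1" prefix, a hash identifier ("02"), and a base64 payload. A minimal Python sketch that unpacks the exact key recorded in this log, assuming (the heredoc body is not visible in the trace) that the payload is the configured key material followed by a 4-byte CRC-32 check value whose byte order is likewise an assumption:

    import base64, zlib

    # Interchange key copied verbatim from the trace above (prefix:hash:base64-payload:)
    key_long = "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:"
    prefix, digest, b64, _ = key_long.split(":")
    raw = base64.b64decode(b64)
    configured, crc = raw[:-4], raw[-4:]
    print(prefix, digest)         # NVMeTLSkey-1 02
    print(configured.decode())    # 00112233445566778899aabbccddeeff0011223344556677
    # Assumption: the trailing 4 bytes are a CRC-32 over the configured key; check both byte orders.
    print(zlib.crc32(configured) in (int.from_bytes(crc, "little"), int.from_bytes(crc, "big")))

The decoded payload matching the hex string passed to format_interchange_psk is what ties the /tmp/tmp.9FRkMzcmOC key file used throughout the rest of this run back to that command.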
00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:03.434 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:03.434 [2024-07-22 17:07:04.992344] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:03.434 [2024-07-22 17:07:04.992516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.693 [2024-07-22 17:07:05.176988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.960 [2024-07-22 17:07:05.462981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.961 [2024-07-22 17:07:05.463057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.961 [2024-07-22 17:07:05.463073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.961 [2024-07-22 17:07:05.463090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.961 [2024-07-22 17:07:05.463102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:03.961 [2024-07-22 17:07:05.463161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.235 [2024-07-22 17:07:05.747775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9FRkMzcmOC 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9FRkMzcmOC 00:29:04.493 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:04.752 [2024-07-22 17:07:06.217817] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.752 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:05.011 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:05.270 [2024-07-22 17:07:06.718004] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:05.270 [2024-07-22 17:07:06.718325] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.270 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:05.528 malloc0 00:29:05.528 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:05.787 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:06.046 [2024-07-22 17:07:07.569990] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9FRkMzcmOC 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9FRkMzcmOC' 00:29:06.046 17:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=78096 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 78096 /var/tmp/bdevperf.sock 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78096 ']' 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:06.046 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:06.305 [2024-07-22 17:07:07.683262] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:06.305 [2024-07-22 17:07:07.683649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78096 ] 00:29:06.305 [2024-07-22 17:07:07.855617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.871 [2024-07-22 17:07:08.197662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.871 [2024-07-22 17:07:08.478727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:07.130 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.130 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:07.130 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:07.388 [2024-07-22 17:07:08.861162] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:07.388 [2024-07-22 17:07:08.861406] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:07.388 TLSTESTn1 00:29:07.388 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:07.647 Running I/O for 10 seconds... 
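For the result table that follows just below, the MiB/s column is simply the IOPS column scaled by the 4096-byte I/O size configured with "-o 4096" above; a quick check of the reported figures (values copied from the TLSTESTn1 row below):

    iops = 3530.90
    io_size_bytes = 4096                           # from "-o 4096" on the bdevperf command line
    print(round(iops * io_size_bytes / 2**20, 2))  # 13.79 (MiB/s), matching the table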
00:29:17.621 00:29:17.621 Latency(us) 00:29:17.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.621 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:17.621 Verification LBA range: start 0x0 length 0x2000 00:29:17.621 TLSTESTn1 : 10.02 3530.90 13.79 0.00 0.00 36190.06 4681.14 34453.21 00:29:17.621 =================================================================================================================== 00:29:17.621 Total : 3530.90 13.79 0.00 0.00 36190.06 4681.14 34453.21 00:29:17.621 0 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 78096 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78096 ']' 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78096 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78096 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78096' 00:29:17.621 killing process with pid 78096 00:29:17.621 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78096 00:29:17.621 Received shutdown signal, test time was about 10.000000 seconds 00:29:17.621 00:29:17.621 Latency(us) 00:29:17.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.622 =================================================================================================================== 00:29:17.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.622 [2024-07-22 17:07:19.171407] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 17:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78096 00:29:17.622 scheduled for removal in v24.09 hit 1 times 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9FRkMzcmOC 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9FRkMzcmOC 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9FRkMzcmOC 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:29:18.995 17:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9FRkMzcmOC 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9FRkMzcmOC' 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=78243 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 78243 /var/tmp/bdevperf.sock 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78243 ']' 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:18.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:18.995 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:19.254 [2024-07-22 17:07:20.716364] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:29:19.254 [2024-07-22 17:07:20.716981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78243 ] 00:29:19.511 [2024-07-22 17:07:20.906065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.769 [2024-07-22 17:07:21.214176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.027 [2024-07-22 17:07:21.493812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:20.287 [2024-07-22 17:07:21.875803] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:20.287 [2024-07-22 17:07:21.875901] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:29:20.287 [2024-07-22 17:07:21.875917] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9FRkMzcmOC 00:29:20.287 request: 00:29:20.287 { 00:29:20.287 "name": "TLSTEST", 00:29:20.287 "trtype": "tcp", 00:29:20.287 "traddr": "10.0.0.2", 00:29:20.287 "adrfam": "ipv4", 00:29:20.287 "trsvcid": "4420", 00:29:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.287 "prchk_reftag": false, 00:29:20.287 "prchk_guard": false, 00:29:20.287 "hdgst": false, 00:29:20.287 "ddgst": false, 00:29:20.287 "psk": "/tmp/tmp.9FRkMzcmOC", 00:29:20.287 "method": "bdev_nvme_attach_controller", 00:29:20.287 "req_id": 1 00:29:20.287 } 00:29:20.287 Got JSON-RPC error response 00:29:20.287 response: 00:29:20.287 { 00:29:20.287 "code": -1, 00:29:20.287 "message": "Operation not permitted" 00:29:20.287 } 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 78243 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78243 ']' 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78243 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:20.287 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.545 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78243 00:29:20.545 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:20.545 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:20.545 killing process with pid 78243 00:29:20.545 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78243' 00:29:20.545 Received shutdown signal, test time was about 10.000000 seconds 00:29:20.545 00:29:20.545 Latency(us) 00:29:20.545 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.545 =================================================================================================================== 00:29:20.545 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:20.545 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78243 00:29:20.545 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78243 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 78041 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78041 ']' 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78041 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78041 00:29:21.922 killing process with pid 78041 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78041' 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78041 00:29:21.922 [2024-07-22 17:07:23.501686] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:21.922 17:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78041 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78307 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78307 00:29:23.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78307 ']' 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:23.824 17:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:23.824 [2024-07-22 17:07:25.416817] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:23.824 [2024-07-22 17:07:25.417001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.094 [2024-07-22 17:07:25.605684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.361 [2024-07-22 17:07:25.860557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.361 [2024-07-22 17:07:25.860625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.361 [2024-07-22 17:07:25.860641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.361 [2024-07-22 17:07:25.860656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.361 [2024-07-22 17:07:25.860667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.361 [2024-07-22 17:07:25.860721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.635 [2024-07-22 17:07:26.136817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9FRkMzcmOC 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9FRkMzcmOC 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.9FRkMzcmOC 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9FRkMzcmOC 00:29:24.894 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:25.152 [2024-07-22 17:07:26.574115] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.152 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:25.410 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:25.668 [2024-07-22 17:07:27.118316] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:25.668 [2024-07-22 17:07:27.118607] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.668 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:25.926 malloc0 00:29:25.926 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:26.184 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:26.443 [2024-07-22 17:07:27.947007] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:29:26.443 [2024-07-22 17:07:27.947079] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:29:26.443 [2024-07-22 17:07:27.947118] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:29:26.443 request: 00:29:26.443 { 00:29:26.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.443 "host": "nqn.2016-06.io.spdk:host1", 00:29:26.443 "psk": "/tmp/tmp.9FRkMzcmOC", 00:29:26.443 "method": "nvmf_subsystem_add_host", 00:29:26.443 "req_id": 1 00:29:26.443 } 00:29:26.443 Got JSON-RPC error response 00:29:26.443 response: 00:29:26.443 { 00:29:26.443 "code": -32603, 00:29:26.443 "message": "Internal error" 00:29:26.443 } 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 78307 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78307 ']' 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78307 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.443 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78307 00:29:26.443 killing process with pid 78307 00:29:26.443 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:26.443 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:26.443 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78307' 00:29:26.443 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78307 00:29:26.443 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78307 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9FRkMzcmOC 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:28.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
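Both failures above come from the same permission check on the PSK file after it was chmod'ed to 0666: the initiator-side bdev_nvme_attach_controller reports "Incorrect permissions for PSK file" (code -1, Operation not permitted) and the target-side nvmf_subsystem_add_host reports the same underlying error as code -32603; once the key is restored to 0600 (also in the trace above), the retries below succeed. A minimal sketch of that gate, assuming SPDK simply rejects key files with any group or other access bits set (the exact mask it enforces is not shown in this log):

    import os, stat

    def psk_permissions_ok(path: str) -> bool:
        # Assumption: any group/other permission bit disqualifies the key file,
        # which matches the observed behaviour here (0600 accepted, 0666 rejected).
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    # Usage against the key file from this run (path exists only on the CI host):
    # psk_permissions_ok("/tmp/tmp.9FRkMzcmOC")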
00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78388 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78388 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78388 ']' 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.345 17:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:28.345 [2024-07-22 17:07:29.839284] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:28.345 [2024-07-22 17:07:29.839475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.602 [2024-07-22 17:07:30.038411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.860 [2024-07-22 17:07:30.402773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.860 [2024-07-22 17:07:30.402864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.860 [2024-07-22 17:07:30.402889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.860 [2024-07-22 17:07:30.402916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.860 [2024-07-22 17:07:30.402936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.860 [2024-07-22 17:07:30.403018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.117 [2024-07-22 17:07:30.691143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.374 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9FRkMzcmOC 00:29:29.375 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9FRkMzcmOC 00:29:29.375 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:29.632 [2024-07-22 17:07:31.175754] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.632 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:29.889 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:30.147 [2024-07-22 17:07:31.747942] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:30.147 [2024-07-22 17:07:31.748226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.406 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:30.664 malloc0 00:29:30.664 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:30.922 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:31.183 [2024-07-22 17:07:32.601245] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=78443 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 78443 /var/tmp/bdevperf.sock 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78443 ']' 
00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:31.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.183 17:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:31.183 [2024-07-22 17:07:32.716348] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:31.183 [2024-07-22 17:07:32.716483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78443 ] 00:29:31.441 [2024-07-22 17:07:32.882606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.700 [2024-07-22 17:07:33.163762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.957 [2024-07-22 17:07:33.441478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:32.215 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.215 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:32.215 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:32.474 [2024-07-22 17:07:33.874585] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.474 [2024-07-22 17:07:33.874774] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:32.474 TLSTESTn1 00:29:32.474 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:32.733 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:29:32.733 "subsystems": [ 00:29:32.733 { 00:29:32.733 "subsystem": "keyring", 00:29:32.733 "config": [] 00:29:32.733 }, 00:29:32.733 { 00:29:32.733 "subsystem": "iobuf", 00:29:32.733 "config": [ 00:29:32.733 { 00:29:32.733 "method": "iobuf_set_options", 00:29:32.733 "params": { 00:29:32.733 "small_pool_count": 8192, 00:29:32.733 "large_pool_count": 1024, 00:29:32.733 "small_bufsize": 8192, 00:29:32.733 "large_bufsize": 135168 00:29:32.733 } 00:29:32.733 } 00:29:32.733 ] 00:29:32.733 }, 00:29:32.733 { 00:29:32.733 "subsystem": "sock", 00:29:32.733 "config": [ 00:29:32.733 { 00:29:32.733 "method": "sock_set_default_impl", 00:29:32.733 "params": { 00:29:32.733 "impl_name": "uring" 00:29:32.733 } 00:29:32.733 }, 00:29:32.733 { 00:29:32.733 "method": "sock_impl_set_options", 00:29:32.733 "params": { 00:29:32.733 "impl_name": "ssl", 00:29:32.733 "recv_buf_size": 4096, 00:29:32.733 
"send_buf_size": 4096, 00:29:32.733 "enable_recv_pipe": true, 00:29:32.733 "enable_quickack": false, 00:29:32.733 "enable_placement_id": 0, 00:29:32.733 "enable_zerocopy_send_server": true, 00:29:32.733 "enable_zerocopy_send_client": false, 00:29:32.733 "zerocopy_threshold": 0, 00:29:32.733 "tls_version": 0, 00:29:32.733 "enable_ktls": false 00:29:32.733 } 00:29:32.733 }, 00:29:32.733 { 00:29:32.733 "method": "sock_impl_set_options", 00:29:32.734 "params": { 00:29:32.734 "impl_name": "posix", 00:29:32.734 "recv_buf_size": 2097152, 00:29:32.734 "send_buf_size": 2097152, 00:29:32.734 "enable_recv_pipe": true, 00:29:32.734 "enable_quickack": false, 00:29:32.734 "enable_placement_id": 0, 00:29:32.734 "enable_zerocopy_send_server": true, 00:29:32.734 "enable_zerocopy_send_client": false, 00:29:32.734 "zerocopy_threshold": 0, 00:29:32.734 "tls_version": 0, 00:29:32.734 "enable_ktls": false 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "sock_impl_set_options", 00:29:32.734 "params": { 00:29:32.734 "impl_name": "uring", 00:29:32.734 "recv_buf_size": 2097152, 00:29:32.734 "send_buf_size": 2097152, 00:29:32.734 "enable_recv_pipe": true, 00:29:32.734 "enable_quickack": false, 00:29:32.734 "enable_placement_id": 0, 00:29:32.734 "enable_zerocopy_send_server": false, 00:29:32.734 "enable_zerocopy_send_client": false, 00:29:32.734 "zerocopy_threshold": 0, 00:29:32.734 "tls_version": 0, 00:29:32.734 "enable_ktls": false 00:29:32.734 } 00:29:32.734 } 00:29:32.734 ] 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "subsystem": "vmd", 00:29:32.734 "config": [] 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "subsystem": "accel", 00:29:32.734 "config": [ 00:29:32.734 { 00:29:32.734 "method": "accel_set_options", 00:29:32.734 "params": { 00:29:32.734 "small_cache_size": 128, 00:29:32.734 "large_cache_size": 16, 00:29:32.734 "task_count": 2048, 00:29:32.734 "sequence_count": 2048, 00:29:32.734 "buf_count": 2048 00:29:32.734 } 00:29:32.734 } 00:29:32.734 ] 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "subsystem": "bdev", 00:29:32.734 "config": [ 00:29:32.734 { 00:29:32.734 "method": "bdev_set_options", 00:29:32.734 "params": { 00:29:32.734 "bdev_io_pool_size": 65535, 00:29:32.734 "bdev_io_cache_size": 256, 00:29:32.734 "bdev_auto_examine": true, 00:29:32.734 "iobuf_small_cache_size": 128, 00:29:32.734 "iobuf_large_cache_size": 16 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "bdev_raid_set_options", 00:29:32.734 "params": { 00:29:32.734 "process_window_size_kb": 1024, 00:29:32.734 "process_max_bandwidth_mb_sec": 0 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "bdev_iscsi_set_options", 00:29:32.734 "params": { 00:29:32.734 "timeout_sec": 30 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "bdev_nvme_set_options", 00:29:32.734 "params": { 00:29:32.734 "action_on_timeout": "none", 00:29:32.734 "timeout_us": 0, 00:29:32.734 "timeout_admin_us": 0, 00:29:32.734 "keep_alive_timeout_ms": 10000, 00:29:32.734 "arbitration_burst": 0, 00:29:32.734 "low_priority_weight": 0, 00:29:32.734 "medium_priority_weight": 0, 00:29:32.734 "high_priority_weight": 0, 00:29:32.734 "nvme_adminq_poll_period_us": 10000, 00:29:32.734 "nvme_ioq_poll_period_us": 0, 00:29:32.734 "io_queue_requests": 0, 00:29:32.734 "delay_cmd_submit": true, 00:29:32.734 "transport_retry_count": 4, 00:29:32.734 "bdev_retry_count": 3, 00:29:32.734 "transport_ack_timeout": 0, 00:29:32.734 "ctrlr_loss_timeout_sec": 0, 00:29:32.734 "reconnect_delay_sec": 0, 00:29:32.734 
"fast_io_fail_timeout_sec": 0, 00:29:32.734 "disable_auto_failback": false, 00:29:32.734 "generate_uuids": false, 00:29:32.734 "transport_tos": 0, 00:29:32.734 "nvme_error_stat": false, 00:29:32.734 "rdma_srq_size": 0, 00:29:32.734 "io_path_stat": false, 00:29:32.734 "allow_accel_sequence": false, 00:29:32.734 "rdma_max_cq_size": 0, 00:29:32.734 "rdma_cm_event_timeout_ms": 0, 00:29:32.734 "dhchap_digests": [ 00:29:32.734 "sha256", 00:29:32.734 "sha384", 00:29:32.734 "sha512" 00:29:32.734 ], 00:29:32.734 "dhchap_dhgroups": [ 00:29:32.734 "null", 00:29:32.734 "ffdhe2048", 00:29:32.734 "ffdhe3072", 00:29:32.734 "ffdhe4096", 00:29:32.734 "ffdhe6144", 00:29:32.734 "ffdhe8192" 00:29:32.734 ] 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "bdev_nvme_set_hotplug", 00:29:32.734 "params": { 00:29:32.734 "period_us": 100000, 00:29:32.734 "enable": false 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "bdev_malloc_create", 00:29:32.734 "params": { 00:29:32.734 "name": "malloc0", 00:29:32.734 "num_blocks": 8192, 00:29:32.734 "block_size": 4096, 00:29:32.734 "physical_block_size": 4096, 00:29:32.734 "uuid": "24f9a87f-26d4-4947-9a42-62605240818f", 00:29:32.734 "optimal_io_boundary": 0, 00:29:32.734 "md_size": 0, 00:29:32.734 "dif_type": 0, 00:29:32.734 "dif_is_head_of_md": false, 00:29:32.734 "dif_pi_format": 0 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "bdev_wait_for_examine" 00:29:32.734 } 00:29:32.734 ] 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "subsystem": "nbd", 00:29:32.734 "config": [] 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "subsystem": "scheduler", 00:29:32.734 "config": [ 00:29:32.734 { 00:29:32.734 "method": "framework_set_scheduler", 00:29:32.734 "params": { 00:29:32.734 "name": "static" 00:29:32.734 } 00:29:32.734 } 00:29:32.734 ] 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "subsystem": "nvmf", 00:29:32.734 "config": [ 00:29:32.734 { 00:29:32.734 "method": "nvmf_set_config", 00:29:32.734 "params": { 00:29:32.734 "discovery_filter": "match_any", 00:29:32.734 "admin_cmd_passthru": { 00:29:32.734 "identify_ctrlr": false 00:29:32.734 } 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "nvmf_set_max_subsystems", 00:29:32.734 "params": { 00:29:32.734 "max_subsystems": 1024 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "nvmf_set_crdt", 00:29:32.734 "params": { 00:29:32.734 "crdt1": 0, 00:29:32.734 "crdt2": 0, 00:29:32.734 "crdt3": 0 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "nvmf_create_transport", 00:29:32.734 "params": { 00:29:32.734 "trtype": "TCP", 00:29:32.734 "max_queue_depth": 128, 00:29:32.734 "max_io_qpairs_per_ctrlr": 127, 00:29:32.734 "in_capsule_data_size": 4096, 00:29:32.734 "max_io_size": 131072, 00:29:32.734 "io_unit_size": 131072, 00:29:32.734 "max_aq_depth": 128, 00:29:32.734 "num_shared_buffers": 511, 00:29:32.734 "buf_cache_size": 4294967295, 00:29:32.734 "dif_insert_or_strip": false, 00:29:32.734 "zcopy": false, 00:29:32.734 "c2h_success": false, 00:29:32.734 "sock_priority": 0, 00:29:32.734 "abort_timeout_sec": 1, 00:29:32.734 "ack_timeout": 0, 00:29:32.734 "data_wr_pool_size": 0 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "nvmf_create_subsystem", 00:29:32.734 "params": { 00:29:32.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.734 "allow_any_host": false, 00:29:32.734 "serial_number": "SPDK00000000000001", 00:29:32.734 "model_number": "SPDK bdev Controller", 00:29:32.734 "max_namespaces": 10, 00:29:32.734 
"min_cntlid": 1, 00:29:32.734 "max_cntlid": 65519, 00:29:32.734 "ana_reporting": false 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "nvmf_subsystem_add_host", 00:29:32.734 "params": { 00:29:32.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.734 "host": "nqn.2016-06.io.spdk:host1", 00:29:32.734 "psk": "/tmp/tmp.9FRkMzcmOC" 00:29:32.734 } 00:29:32.734 }, 00:29:32.734 { 00:29:32.734 "method": "nvmf_subsystem_add_ns", 00:29:32.734 "params": { 00:29:32.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.735 "namespace": { 00:29:32.735 "nsid": 1, 00:29:32.735 "bdev_name": "malloc0", 00:29:32.735 "nguid": "24F9A87F26D449479A4262605240818F", 00:29:32.735 "uuid": "24f9a87f-26d4-4947-9a42-62605240818f", 00:29:32.735 "no_auto_visible": false 00:29:32.735 } 00:29:32.735 } 00:29:32.735 }, 00:29:32.735 { 00:29:32.735 "method": "nvmf_subsystem_add_listener", 00:29:32.735 "params": { 00:29:32.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.735 "listen_address": { 00:29:32.735 "trtype": "TCP", 00:29:32.735 "adrfam": "IPv4", 00:29:32.735 "traddr": "10.0.0.2", 00:29:32.735 "trsvcid": "4420" 00:29:32.735 }, 00:29:32.735 "secure_channel": true 00:29:32.735 } 00:29:32.735 } 00:29:32.735 ] 00:29:32.735 } 00:29:32.735 ] 00:29:32.735 }' 00:29:32.735 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:29:32.994 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:29:32.994 "subsystems": [ 00:29:32.994 { 00:29:32.994 "subsystem": "keyring", 00:29:32.994 "config": [] 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "subsystem": "iobuf", 00:29:32.994 "config": [ 00:29:32.994 { 00:29:32.994 "method": "iobuf_set_options", 00:29:32.994 "params": { 00:29:32.994 "small_pool_count": 8192, 00:29:32.994 "large_pool_count": 1024, 00:29:32.994 "small_bufsize": 8192, 00:29:32.994 "large_bufsize": 135168 00:29:32.994 } 00:29:32.994 } 00:29:32.994 ] 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "subsystem": "sock", 00:29:32.994 "config": [ 00:29:32.994 { 00:29:32.994 "method": "sock_set_default_impl", 00:29:32.994 "params": { 00:29:32.994 "impl_name": "uring" 00:29:32.994 } 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "method": "sock_impl_set_options", 00:29:32.994 "params": { 00:29:32.994 "impl_name": "ssl", 00:29:32.994 "recv_buf_size": 4096, 00:29:32.994 "send_buf_size": 4096, 00:29:32.994 "enable_recv_pipe": true, 00:29:32.994 "enable_quickack": false, 00:29:32.994 "enable_placement_id": 0, 00:29:32.994 "enable_zerocopy_send_server": true, 00:29:32.994 "enable_zerocopy_send_client": false, 00:29:32.994 "zerocopy_threshold": 0, 00:29:32.994 "tls_version": 0, 00:29:32.994 "enable_ktls": false 00:29:32.994 } 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "method": "sock_impl_set_options", 00:29:32.994 "params": { 00:29:32.994 "impl_name": "posix", 00:29:32.994 "recv_buf_size": 2097152, 00:29:32.994 "send_buf_size": 2097152, 00:29:32.994 "enable_recv_pipe": true, 00:29:32.994 "enable_quickack": false, 00:29:32.994 "enable_placement_id": 0, 00:29:32.994 "enable_zerocopy_send_server": true, 00:29:32.994 "enable_zerocopy_send_client": false, 00:29:32.994 "zerocopy_threshold": 0, 00:29:32.994 "tls_version": 0, 00:29:32.994 "enable_ktls": false 00:29:32.994 } 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "method": "sock_impl_set_options", 00:29:32.994 "params": { 00:29:32.994 "impl_name": "uring", 00:29:32.994 "recv_buf_size": 2097152, 00:29:32.994 "send_buf_size": 2097152, 
00:29:32.994 "enable_recv_pipe": true, 00:29:32.994 "enable_quickack": false, 00:29:32.994 "enable_placement_id": 0, 00:29:32.994 "enable_zerocopy_send_server": false, 00:29:32.994 "enable_zerocopy_send_client": false, 00:29:32.994 "zerocopy_threshold": 0, 00:29:32.994 "tls_version": 0, 00:29:32.994 "enable_ktls": false 00:29:32.994 } 00:29:32.994 } 00:29:32.994 ] 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "subsystem": "vmd", 00:29:32.994 "config": [] 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "subsystem": "accel", 00:29:32.994 "config": [ 00:29:32.994 { 00:29:32.994 "method": "accel_set_options", 00:29:32.994 "params": { 00:29:32.994 "small_cache_size": 128, 00:29:32.994 "large_cache_size": 16, 00:29:32.994 "task_count": 2048, 00:29:32.994 "sequence_count": 2048, 00:29:32.994 "buf_count": 2048 00:29:32.994 } 00:29:32.994 } 00:29:32.994 ] 00:29:32.994 }, 00:29:32.994 { 00:29:32.994 "subsystem": "bdev", 00:29:32.995 "config": [ 00:29:32.995 { 00:29:32.995 "method": "bdev_set_options", 00:29:32.995 "params": { 00:29:32.995 "bdev_io_pool_size": 65535, 00:29:32.995 "bdev_io_cache_size": 256, 00:29:32.995 "bdev_auto_examine": true, 00:29:32.995 "iobuf_small_cache_size": 128, 00:29:32.995 "iobuf_large_cache_size": 16 00:29:32.995 } 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "method": "bdev_raid_set_options", 00:29:32.995 "params": { 00:29:32.995 "process_window_size_kb": 1024, 00:29:32.995 "process_max_bandwidth_mb_sec": 0 00:29:32.995 } 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "method": "bdev_iscsi_set_options", 00:29:32.995 "params": { 00:29:32.995 "timeout_sec": 30 00:29:32.995 } 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "method": "bdev_nvme_set_options", 00:29:32.995 "params": { 00:29:32.995 "action_on_timeout": "none", 00:29:32.995 "timeout_us": 0, 00:29:32.995 "timeout_admin_us": 0, 00:29:32.995 "keep_alive_timeout_ms": 10000, 00:29:32.995 "arbitration_burst": 0, 00:29:32.995 "low_priority_weight": 0, 00:29:32.995 "medium_priority_weight": 0, 00:29:32.995 "high_priority_weight": 0, 00:29:32.995 "nvme_adminq_poll_period_us": 10000, 00:29:32.995 "nvme_ioq_poll_period_us": 0, 00:29:32.995 "io_queue_requests": 512, 00:29:32.995 "delay_cmd_submit": true, 00:29:32.995 "transport_retry_count": 4, 00:29:32.995 "bdev_retry_count": 3, 00:29:32.995 "transport_ack_timeout": 0, 00:29:32.995 "ctrlr_loss_timeout_sec": 0, 00:29:32.995 "reconnect_delay_sec": 0, 00:29:32.995 "fast_io_fail_timeout_sec": 0, 00:29:32.995 "disable_auto_failback": false, 00:29:32.995 "generate_uuids": false, 00:29:32.995 "transport_tos": 0, 00:29:32.995 "nvme_error_stat": false, 00:29:32.995 "rdma_srq_size": 0, 00:29:32.995 "io_path_stat": false, 00:29:32.995 "allow_accel_sequence": false, 00:29:32.995 "rdma_max_cq_size": 0, 00:29:32.995 "rdma_cm_event_timeout_ms": 0, 00:29:32.995 "dhchap_digests": [ 00:29:32.995 "sha256", 00:29:32.995 "sha384", 00:29:32.995 "sha512" 00:29:32.995 ], 00:29:32.995 "dhchap_dhgroups": [ 00:29:32.995 "null", 00:29:32.995 "ffdhe2048", 00:29:32.995 "ffdhe3072", 00:29:32.995 "ffdhe4096", 00:29:32.995 "ffdhe6144", 00:29:32.995 "ffdhe8192" 00:29:32.995 ] 00:29:32.995 } 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "method": "bdev_nvme_attach_controller", 00:29:32.995 "params": { 00:29:32.995 "name": "TLSTEST", 00:29:32.995 "trtype": "TCP", 00:29:32.995 "adrfam": "IPv4", 00:29:32.995 "traddr": "10.0.0.2", 00:29:32.995 "trsvcid": "4420", 00:29:32.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.995 "prchk_reftag": false, 00:29:32.995 "prchk_guard": false, 00:29:32.995 "ctrlr_loss_timeout_sec": 0, 
00:29:32.995 "reconnect_delay_sec": 0, 00:29:32.995 "fast_io_fail_timeout_sec": 0, 00:29:32.995 "psk": "/tmp/tmp.9FRkMzcmOC", 00:29:32.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.995 "hdgst": false, 00:29:32.995 "ddgst": false 00:29:32.995 } 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "method": "bdev_nvme_set_hotplug", 00:29:32.995 "params": { 00:29:32.995 "period_us": 100000, 00:29:32.995 "enable": false 00:29:32.995 } 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "method": "bdev_wait_for_examine" 00:29:32.995 } 00:29:32.995 ] 00:29:32.995 }, 00:29:32.995 { 00:29:32.995 "subsystem": "nbd", 00:29:32.995 "config": [] 00:29:32.995 } 00:29:32.995 ] 00:29:32.995 }' 00:29:32.995 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 78443 00:29:32.995 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78443 ']' 00:29:32.995 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78443 00:29:32.995 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78443 00:29:33.254 killing process with pid 78443 00:29:33.254 Received shutdown signal, test time was about 10.000000 seconds 00:29:33.254 00:29:33.254 Latency(us) 00:29:33.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.254 =================================================================================================================== 00:29:33.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78443' 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78443 00:29:33.254 [2024-07-22 17:07:34.640139] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:33.254 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78443 00:29:34.630 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 78388 00:29:34.630 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78388 ']' 00:29:34.630 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78388 00:29:34.630 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:34.630 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:34.630 17:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78388 00:29:34.630 killing process with pid 78388 00:29:34.630 17:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:34.630 17:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:34.630 17:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78388' 00:29:34.630 17:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78388 00:29:34.630 [2024-07-22 17:07:36.018193] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:34.630 17:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78388 00:29:36.131 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:29:36.131 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:36.131 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.131 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:36.131 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:29:36.131 "subsystems": [ 00:29:36.131 { 00:29:36.131 "subsystem": "keyring", 00:29:36.131 "config": [] 00:29:36.131 }, 00:29:36.131 { 00:29:36.131 "subsystem": "iobuf", 00:29:36.131 "config": [ 00:29:36.131 { 00:29:36.131 "method": "iobuf_set_options", 00:29:36.131 "params": { 00:29:36.131 "small_pool_count": 8192, 00:29:36.131 "large_pool_count": 1024, 00:29:36.131 "small_bufsize": 8192, 00:29:36.131 "large_bufsize": 135168 00:29:36.131 } 00:29:36.131 } 00:29:36.131 ] 00:29:36.131 }, 00:29:36.131 { 00:29:36.131 "subsystem": "sock", 00:29:36.131 "config": [ 00:29:36.131 { 00:29:36.131 "method": "sock_set_default_impl", 00:29:36.131 "params": { 00:29:36.131 "impl_name": "uring" 00:29:36.131 } 00:29:36.131 }, 00:29:36.131 { 00:29:36.131 "method": "sock_impl_set_options", 00:29:36.131 "params": { 00:29:36.131 "impl_name": "ssl", 00:29:36.131 "recv_buf_size": 4096, 00:29:36.131 "send_buf_size": 4096, 00:29:36.131 "enable_recv_pipe": true, 00:29:36.131 "enable_quickack": false, 00:29:36.131 "enable_placement_id": 0, 00:29:36.131 "enable_zerocopy_send_server": true, 00:29:36.131 "enable_zerocopy_send_client": false, 00:29:36.131 "zerocopy_threshold": 0, 00:29:36.131 "tls_version": 0, 00:29:36.131 "enable_ktls": false 00:29:36.131 } 00:29:36.131 }, 00:29:36.131 { 00:29:36.131 "method": "sock_impl_set_options", 00:29:36.131 "params": { 00:29:36.131 "impl_name": "posix", 00:29:36.131 "recv_buf_size": 2097152, 00:29:36.132 "send_buf_size": 2097152, 00:29:36.132 "enable_recv_pipe": true, 00:29:36.132 "enable_quickack": false, 00:29:36.132 "enable_placement_id": 0, 00:29:36.132 "enable_zerocopy_send_server": true, 00:29:36.132 "enable_zerocopy_send_client": false, 00:29:36.132 "zerocopy_threshold": 0, 00:29:36.132 "tls_version": 0, 00:29:36.132 "enable_ktls": false 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "sock_impl_set_options", 00:29:36.132 "params": { 00:29:36.132 "impl_name": "uring", 00:29:36.132 "recv_buf_size": 2097152, 00:29:36.132 "send_buf_size": 2097152, 00:29:36.132 "enable_recv_pipe": true, 00:29:36.132 "enable_quickack": false, 00:29:36.132 "enable_placement_id": 0, 00:29:36.132 "enable_zerocopy_send_server": false, 00:29:36.132 "enable_zerocopy_send_client": false, 00:29:36.132 "zerocopy_threshold": 0, 00:29:36.132 "tls_version": 0, 00:29:36.132 "enable_ktls": false 00:29:36.132 } 00:29:36.132 } 00:29:36.132 ] 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "subsystem": "vmd", 00:29:36.132 "config": [] 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 
"subsystem": "accel", 00:29:36.132 "config": [ 00:29:36.132 { 00:29:36.132 "method": "accel_set_options", 00:29:36.132 "params": { 00:29:36.132 "small_cache_size": 128, 00:29:36.132 "large_cache_size": 16, 00:29:36.132 "task_count": 2048, 00:29:36.132 "sequence_count": 2048, 00:29:36.132 "buf_count": 2048 00:29:36.132 } 00:29:36.132 } 00:29:36.132 ] 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "subsystem": "bdev", 00:29:36.132 "config": [ 00:29:36.132 { 00:29:36.132 "method": "bdev_set_options", 00:29:36.132 "params": { 00:29:36.132 "bdev_io_pool_size": 65535, 00:29:36.132 "bdev_io_cache_size": 256, 00:29:36.132 "bdev_auto_examine": true, 00:29:36.132 "iobuf_small_cache_size": 128, 00:29:36.132 "iobuf_large_cache_size": 16 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "bdev_raid_set_options", 00:29:36.132 "params": { 00:29:36.132 "process_window_size_kb": 1024, 00:29:36.132 "process_max_bandwidth_mb_sec": 0 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "bdev_iscsi_set_options", 00:29:36.132 "params": { 00:29:36.132 "timeout_sec": 30 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "bdev_nvme_set_options", 00:29:36.132 "params": { 00:29:36.132 "action_on_timeout": "none", 00:29:36.132 "timeout_us": 0, 00:29:36.132 "timeout_admin_us": 0, 00:29:36.132 "keep_alive_timeout_ms": 10000, 00:29:36.132 "arbitration_burst": 0, 00:29:36.132 "low_priority_weight": 0, 00:29:36.132 "medium_priority_weight": 0, 00:29:36.132 "high_priority_weight": 0, 00:29:36.132 "nvme_adminq_poll_period_us": 10000, 00:29:36.132 "nvme_ioq_poll_period_us": 0, 00:29:36.132 "io_queue_requests": 0, 00:29:36.132 "delay_cmd_submit": true, 00:29:36.132 "transport_retry_count": 4, 00:29:36.132 "bdev_retry_count": 3, 00:29:36.132 "transport_ack_timeout": 0, 00:29:36.132 "ctrlr_loss_timeout_sec": 0, 00:29:36.132 "reconnect_delay_sec": 0, 00:29:36.132 "fast_io_fail_timeout_sec": 0, 00:29:36.132 "disable_auto_failback": false, 00:29:36.132 "generate_uuids": false, 00:29:36.132 "transport_tos": 0, 00:29:36.132 "nvme_error_stat": false, 00:29:36.132 "rdma_srq_size": 0, 00:29:36.132 "io_path_stat": false, 00:29:36.132 "allow_accel_sequence": false, 00:29:36.132 "rdma_max_cq_size": 0, 00:29:36.132 "rdma_cm_event_timeout_ms": 0, 00:29:36.132 "dhchap_digests": [ 00:29:36.132 "sha256", 00:29:36.132 "sha384", 00:29:36.132 "sha512" 00:29:36.132 ], 00:29:36.132 "dhchap_dhgroups": [ 00:29:36.132 "null", 00:29:36.132 "ffdhe2048", 00:29:36.132 "ffdhe3072", 00:29:36.132 "ffdhe4096", 00:29:36.132 "ffdhe6144", 00:29:36.132 "ffdhe8192" 00:29:36.132 ] 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "bdev_nvme_set_hotplug", 00:29:36.132 "params": { 00:29:36.132 "period_us": 100000, 00:29:36.132 "enable": false 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "bdev_malloc_create", 00:29:36.132 "params": { 00:29:36.132 "name": "malloc0", 00:29:36.132 "num_blocks": 8192, 00:29:36.132 "block_size": 4096, 00:29:36.132 "physical_block_size": 4096, 00:29:36.132 "uuid": "24f9a87f-26d4-4947-9a42-62605240818f", 00:29:36.132 "optimal_io_boundary": 0, 00:29:36.132 "md_size": 0, 00:29:36.132 "dif_type": 0, 00:29:36.132 "dif_is_head_of_md": false, 00:29:36.132 "dif_pi_format": 0 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "bdev_wait_for_examine" 00:29:36.132 } 00:29:36.132 ] 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "subsystem": "nbd", 00:29:36.132 "config": [] 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "subsystem": "scheduler", 
00:29:36.132 "config": [ 00:29:36.132 { 00:29:36.132 "method": "framework_set_scheduler", 00:29:36.132 "params": { 00:29:36.132 "name": "static" 00:29:36.132 } 00:29:36.132 } 00:29:36.132 ] 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "subsystem": "nvmf", 00:29:36.132 "config": [ 00:29:36.132 { 00:29:36.132 "method": "nvmf_set_config", 00:29:36.132 "params": { 00:29:36.132 "discovery_filter": "match_any", 00:29:36.132 "admin_cmd_passthru": { 00:29:36.132 "identify_ctrlr": false 00:29:36.132 } 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "nvmf_set_max_subsystems", 00:29:36.132 "params": { 00:29:36.132 "max_subsystems": 1024 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "nvmf_set_crdt", 00:29:36.132 "params": { 00:29:36.132 "crdt1": 0, 00:29:36.132 "crdt2": 0, 00:29:36.132 "crdt3": 0 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "nvmf_create_transport", 00:29:36.132 "params": { 00:29:36.132 "trtype": "TCP", 00:29:36.132 "max_queue_depth": 128, 00:29:36.132 "max_io_qpairs_per_ctrlr": 127, 00:29:36.132 "in_capsule_data_size": 4096, 00:29:36.132 "max_io_size": 131072, 00:29:36.132 "io_unit_size": 131072, 00:29:36.132 "max_aq_depth": 128, 00:29:36.132 "num_shared_buffers": 511, 00:29:36.132 "buf_cache_size": 4294967295, 00:29:36.132 "dif_insert_or_strip": false, 00:29:36.132 "zcopy": false, 00:29:36.132 "c2h_success": false, 00:29:36.132 "sock_priority": 0, 00:29:36.132 "abort_timeout_sec": 1, 00:29:36.132 "ack_timeout": 0, 00:29:36.132 "data_wr_pool_size": 0 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "nvmf_create_subsystem", 00:29:36.132 "params": { 00:29:36.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.132 "allow_any_host": false, 00:29:36.132 "serial_number": "SPDK00000000000001", 00:29:36.132 "model_number": "SPDK bdev Controller", 00:29:36.132 "max_namespaces": 10, 00:29:36.132 "min_cntlid": 1, 00:29:36.132 "max_cntlid": 65519, 00:29:36.132 "ana_reporting": false 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "nvmf_subsystem_add_host", 00:29:36.132 "params": { 00:29:36.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.132 "host": "nqn.2016-06.io.spdk:host1", 00:29:36.132 "psk": "/tmp/tmp.9FRkMzcmOC" 00:29:36.132 } 00:29:36.132 }, 00:29:36.132 { 00:29:36.132 "method": "nvmf_subsystem_add_ns", 00:29:36.132 "params": { 00:29:36.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.132 "namespace": { 00:29:36.132 "nsid": 1, 00:29:36.132 "bdev_name": "malloc0", 00:29:36.132 "nguid": "24F9A87F26D449479A4262605240818F", 00:29:36.132 "uuid": "24f9a87f-26d4-4947-9a42-62605240818f", 00:29:36.133 "no_auto_visible": false 00:29:36.133 } 00:29:36.133 } 00:29:36.133 }, 00:29:36.133 { 00:29:36.133 "method": "nvmf_subsystem_add_listener", 00:29:36.133 "params": { 00:29:36.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.133 "listen_address": { 00:29:36.133 "trtype": "TCP", 00:29:36.133 "adrfam": "IPv4", 00:29:36.133 "traddr": "10.0.0.2", 00:29:36.133 "trsvcid": "4420" 00:29:36.133 }, 00:29:36.133 "secure_channel": true 00:29:36.133 } 00:29:36.133 } 00:29:36.133 ] 00:29:36.133 } 00:29:36.133 ] 00:29:36.133 }' 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78516 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 78516 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78516 ']' 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.133 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:36.133 [2024-07-22 17:07:37.722811] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:36.133 [2024-07-22 17:07:37.723005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.390 [2024-07-22 17:07:37.915484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.648 [2024-07-22 17:07:38.238289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.648 [2024-07-22 17:07:38.238357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.648 [2024-07-22 17:07:38.238372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.648 [2024-07-22 17:07:38.238404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.648 [2024-07-22 17:07:38.238416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.648 [2024-07-22 17:07:38.238570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.214 [2024-07-22 17:07:38.614887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:37.214 [2024-07-22 17:07:38.823500] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.474 [2024-07-22 17:07:38.846376] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:37.474 [2024-07-22 17:07:38.862367] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:37.474 [2024-07-22 17:07:38.862608] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:37.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
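For reference, both sides of this test stage take their full JSON configuration on a file descriptor rather than from a file on disk: the nvmf_tgt above is launched with -c /dev/fd/62 and the bdevperf initiator below with -c /dev/fd/63, each fed the config blob echoed in the trace. A minimal sketch of that pattern, assuming bash process substitution is what produces the /dev/fd/NN paths and using placeholder variables (tgt_json, bperf_json) for the echoed JSON:

# Sketch only: launch target and initiator with configs piped over file descriptors.
# tgt_json and bperf_json stand in for the full JSON blobs echoed above and below.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgt_json")
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_json")
# Once bdevperf is listening on its RPC socket, the 10-second run is started with:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The bdevperf config shown below carries the PSK as a raw file path ("psk": "/tmp/tmp.9FRkMzcmOC") inside its bdev_nvme_attach_controller entry, which is what triggers the spdk_nvme_ctrlr_opts.psk deprecation warning seen in this run.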
00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=78554 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 78554 /var/tmp/bdevperf.sock 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78554 ']' 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:29:37.474 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:29:37.474 "subsystems": [ 00:29:37.474 { 00:29:37.474 "subsystem": "keyring", 00:29:37.474 "config": [] 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "subsystem": "iobuf", 00:29:37.474 "config": [ 00:29:37.474 { 00:29:37.474 "method": "iobuf_set_options", 00:29:37.474 "params": { 00:29:37.474 "small_pool_count": 8192, 00:29:37.474 "large_pool_count": 1024, 00:29:37.474 "small_bufsize": 8192, 00:29:37.474 "large_bufsize": 135168 00:29:37.474 } 00:29:37.474 } 00:29:37.474 ] 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "subsystem": "sock", 00:29:37.474 "config": [ 00:29:37.474 { 00:29:37.474 "method": "sock_set_default_impl", 00:29:37.474 "params": { 00:29:37.474 "impl_name": "uring" 00:29:37.474 } 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "method": "sock_impl_set_options", 00:29:37.474 "params": { 00:29:37.474 "impl_name": "ssl", 00:29:37.474 "recv_buf_size": 4096, 00:29:37.474 "send_buf_size": 4096, 00:29:37.474 "enable_recv_pipe": true, 00:29:37.474 "enable_quickack": false, 00:29:37.474 "enable_placement_id": 0, 00:29:37.474 "enable_zerocopy_send_server": true, 00:29:37.474 "enable_zerocopy_send_client": false, 00:29:37.474 "zerocopy_threshold": 0, 00:29:37.474 "tls_version": 0, 00:29:37.474 "enable_ktls": false 00:29:37.474 } 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "method": "sock_impl_set_options", 00:29:37.474 "params": { 00:29:37.474 "impl_name": "posix", 00:29:37.474 "recv_buf_size": 2097152, 00:29:37.474 "send_buf_size": 2097152, 00:29:37.474 "enable_recv_pipe": true, 00:29:37.474 "enable_quickack": false, 00:29:37.474 "enable_placement_id": 0, 00:29:37.474 "enable_zerocopy_send_server": true, 00:29:37.474 "enable_zerocopy_send_client": false, 00:29:37.474 "zerocopy_threshold": 0, 00:29:37.474 "tls_version": 0, 00:29:37.474 "enable_ktls": false 00:29:37.474 } 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "method": "sock_impl_set_options", 00:29:37.474 "params": { 00:29:37.474 "impl_name": "uring", 00:29:37.474 "recv_buf_size": 2097152, 00:29:37.474 "send_buf_size": 2097152, 00:29:37.474 "enable_recv_pipe": true, 00:29:37.474 
"enable_quickack": false, 00:29:37.474 "enable_placement_id": 0, 00:29:37.474 "enable_zerocopy_send_server": false, 00:29:37.474 "enable_zerocopy_send_client": false, 00:29:37.474 "zerocopy_threshold": 0, 00:29:37.474 "tls_version": 0, 00:29:37.474 "enable_ktls": false 00:29:37.474 } 00:29:37.474 } 00:29:37.474 ] 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "subsystem": "vmd", 00:29:37.474 "config": [] 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "subsystem": "accel", 00:29:37.474 "config": [ 00:29:37.474 { 00:29:37.474 "method": "accel_set_options", 00:29:37.474 "params": { 00:29:37.474 "small_cache_size": 128, 00:29:37.474 "large_cache_size": 16, 00:29:37.474 "task_count": 2048, 00:29:37.474 "sequence_count": 2048, 00:29:37.474 "buf_count": 2048 00:29:37.474 } 00:29:37.474 } 00:29:37.474 ] 00:29:37.474 }, 00:29:37.474 { 00:29:37.474 "subsystem": "bdev", 00:29:37.474 "config": [ 00:29:37.474 { 00:29:37.474 "method": "bdev_set_options", 00:29:37.474 "params": { 00:29:37.474 "bdev_io_pool_size": 65535, 00:29:37.474 "bdev_io_cache_size": 256, 00:29:37.474 "bdev_auto_examine": true, 00:29:37.474 "iobuf_small_cache_size": 128, 00:29:37.474 "iobuf_large_cache_size": 16 00:29:37.475 } 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "method": "bdev_raid_set_options", 00:29:37.475 "params": { 00:29:37.475 "process_window_size_kb": 1024, 00:29:37.475 "process_max_bandwidth_mb_sec": 0 00:29:37.475 } 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "method": "bdev_iscsi_set_options", 00:29:37.475 "params": { 00:29:37.475 "timeout_sec": 30 00:29:37.475 } 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "method": "bdev_nvme_set_options", 00:29:37.475 "params": { 00:29:37.475 "action_on_timeout": "none", 00:29:37.475 "timeout_us": 0, 00:29:37.475 "timeout_admin_us": 0, 00:29:37.475 "keep_alive_timeout_ms": 10000, 00:29:37.475 "arbitration_burst": 0, 00:29:37.475 "low_priority_weight": 0, 00:29:37.475 "medium_priority_weight": 0, 00:29:37.475 "high_priority_weight": 0, 00:29:37.475 "nvme_adminq_poll_period_us": 10000, 00:29:37.475 "nvme_ioq_poll_period_us": 0, 00:29:37.475 "io_queue_requests": 512, 00:29:37.475 "delay_cmd_submit": true, 00:29:37.475 "transport_retry_count": 4, 00:29:37.475 "bdev_retry_count": 3, 00:29:37.475 "transport_ack_timeout": 0, 00:29:37.475 "ctrlr_loss_timeout_sec": 0, 00:29:37.475 "reconnect_delay_sec": 0, 00:29:37.475 "fast_io_fail_timeout_sec": 0, 00:29:37.475 "disable_auto_failback": false, 00:29:37.475 "generate_uuids": false, 00:29:37.475 "transport_tos": 0, 00:29:37.475 "nvme_error_stat": false, 00:29:37.475 "rdma_srq_size": 0, 00:29:37.475 "io_path_stat": false, 00:29:37.475 "allow_accel_sequence": false, 00:29:37.475 "rdma_max_cq_size": 0, 00:29:37.475 "rdma_cm_event_timeout_ms": 0, 00:29:37.475 "dhchap_digests": [ 00:29:37.475 "sha256", 00:29:37.475 "sha384", 00:29:37.475 "sha512" 00:29:37.475 ], 00:29:37.475 "dhchap_dhgroups": [ 00:29:37.475 "null", 00:29:37.475 "ffdhe2048", 00:29:37.475 "ffdhe3072", 00:29:37.475 "ffdhe4096", 00:29:37.475 "ffdhe6144", 00:29:37.475 "ffdhe8192" 00:29:37.475 ] 00:29:37.475 } 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "method": "bdev_nvme_attach_controller", 00:29:37.475 "params": { 00:29:37.475 "name": "TLSTEST", 00:29:37.475 "trtype": "TCP", 00:29:37.475 "adrfam": "IPv4", 00:29:37.475 "traddr": "10.0.0.2", 00:29:37.475 "trsvcid": "4420", 00:29:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.475 "prchk_reftag": false, 00:29:37.475 "prchk_guard": false, 00:29:37.475 "ctrlr_loss_timeout_sec": 0, 00:29:37.475 "reconnect_delay_sec": 0, 00:29:37.475 
"fast_io_fail_timeout_sec": 0, 00:29:37.475 "psk": "/tmp/tmp.9FRkMzcmOC", 00:29:37.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.475 "hdgst": false, 00:29:37.475 "ddgst": false 00:29:37.475 } 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "method": "bdev_nvme_set_hotplug", 00:29:37.475 "params": { 00:29:37.475 "period_us": 100000, 00:29:37.475 "enable": false 00:29:37.475 } 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "method": "bdev_wait_for_examine" 00:29:37.475 } 00:29:37.475 ] 00:29:37.475 }, 00:29:37.475 { 00:29:37.475 "subsystem": "nbd", 00:29:37.475 "config": [] 00:29:37.475 } 00:29:37.475 ] 00:29:37.475 }' 00:29:37.734 [2024-07-22 17:07:39.104663] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:37.734 [2024-07-22 17:07:39.104844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78554 ] 00:29:37.734 [2024-07-22 17:07:39.289011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.027 [2024-07-22 17:07:39.562986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.611 [2024-07-22 17:07:39.918313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:38.611 [2024-07-22 17:07:40.051125] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:38.611 [2024-07-22 17:07:40.051515] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:38.611 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.611 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:38.611 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:38.924 Running I/O for 10 seconds... 
00:29:48.893 00:29:48.893 Latency(us) 00:29:48.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.893 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:48.893 Verification LBA range: start 0x0 length 0x2000 00:29:48.893 TLSTESTn1 : 10.02 3493.21 13.65 0.00 0.00 36580.17 5554.96 34702.87 00:29:48.893 =================================================================================================================== 00:29:48.893 Total : 3493.21 13.65 0.00 0.00 36580.17 5554.96 34702.87 00:29:48.893 0 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 78554 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78554 ']' 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78554 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78554 00:29:48.893 killing process with pid 78554 00:29:48.893 Received shutdown signal, test time was about 10.000000 seconds 00:29:48.893 00:29:48.893 Latency(us) 00:29:48.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.893 =================================================================================================================== 00:29:48.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78554' 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78554 00:29:48.893 17:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78554 00:29:48.893 [2024-07-22 17:07:50.374394] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 78516 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78516 ']' 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78516 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78516 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 78516' 00:29:50.844 killing process with pid 78516 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78516 00:29:50.844 [2024-07-22 17:07:51.990032] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:50.844 17:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78516 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78717 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78717 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78717 ']' 00:29:52.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.229 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:52.488 [2024-07-22 17:07:53.862528] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:52.488 [2024-07-22 17:07:53.862698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.488 [2024-07-22 17:07:54.037132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.745 [2024-07-22 17:07:54.297537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.745 [2024-07-22 17:07:54.297624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.745 [2024-07-22 17:07:54.297645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.745 [2024-07-22 17:07:54.297668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.746 [2024-07-22 17:07:54.297686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:52.746 [2024-07-22 17:07:54.297759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.004 [2024-07-22 17:07:54.575267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9FRkMzcmOC 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9FRkMzcmOC 00:29:53.262 17:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:53.520 [2024-07-22 17:07:54.988613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.520 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:53.778 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:54.036 [2024-07-22 17:07:55.444819] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:54.036 [2024-07-22 17:07:55.445104] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.036 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:54.294 malloc0 00:29:54.294 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:54.552 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC 00:29:54.810 [2024-07-22 17:07:56.208683] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:54.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
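Condensed from the setup_nvmf_tgt trace above, the RPC sequence that builds the TLS-enabled target in this run is the following (commands and arguments are taken from the log; rpc is only a shorthand variable introduced here, and the -k flag on the listener is what appears to trigger the "TLS support is considered experimental" notice):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create the TCP transport and a subsystem with a malloc namespace.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Allow the host, supplying the pre-shared key as a file path.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9FRkMzcmOC

Note that passing --psk as a file path is what produces the "PSK path" deprecation warning logged above; the later part of this test switches to a named keyring entry on the initiator side instead.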
00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=78772 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 78772 /var/tmp/bdevperf.sock 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78772 ']' 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.810 17:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:54.810 [2024-07-22 17:07:56.329712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:54.810 [2024-07-22 17:07:56.329861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78772 ] 00:29:55.068 [2024-07-22 17:07:56.501030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.327 [2024-07-22 17:07:56.770082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.586 [2024-07-22 17:07:57.052327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:55.846 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.846 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:55.846 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9FRkMzcmOC 00:29:56.105 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:29:56.105 [2024-07-22 17:07:57.679208] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:56.363 nvme0n1 00:29:56.363 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:56.363 Running I/O for 1 seconds... 
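On the initiator side of this run the PSK is no longer embedded in a bdevperf JSON config; it is first registered with the keyring and then referenced by name when attaching the controller. Condensed from the commands in the trace above (rpc again used only as shorthand):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Register the PSK file with the keyring under the name key0.
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9FRkMzcmOC
# Attach the controller over TCP, referencing the key by name rather than by path.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Run the short verify workload against the resulting nvme0n1 bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests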
00:29:57.298 00:29:57.298 Latency(us) 00:29:57.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.298 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:57.298 Verification LBA range: start 0x0 length 0x2000 00:29:57.298 nvme0n1 : 1.02 3739.04 14.61 0.00 0.00 33864.41 7489.83 35701.52 00:29:57.298 =================================================================================================================== 00:29:57.298 Total : 3739.04 14.61 0.00 0.00 33864.41 7489.83 35701.52 00:29:57.298 0 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 78772 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78772 ']' 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78772 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78772 00:29:57.556 killing process with pid 78772 00:29:57.556 Received shutdown signal, test time was about 1.000000 seconds 00:29:57.556 00:29:57.556 Latency(us) 00:29:57.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.556 =================================================================================================================== 00:29:57.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78772' 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78772 00:29:57.556 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78772 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 78717 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78717 ']' 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78717 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78717 00:29:58.934 killing process with pid 78717 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78717' 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78717 00:29:58.934 [2024-07-22 17:08:00.308288] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:58.934 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78717 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78849 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78849 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78849 ']' 00:30:00.314 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.315 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:00.315 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.315 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:00.315 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.572 [2024-07-22 17:08:02.045733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:00.572 [2024-07-22 17:08:02.045884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.829 [2024-07-22 17:08:02.210089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.087 [2024-07-22 17:08:02.585356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.087 [2024-07-22 17:08:02.585453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.087 [2024-07-22 17:08:02.585494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.087 [2024-07-22 17:08:02.585519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.087 [2024-07-22 17:08:02.585538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.087 [2024-07-22 17:08:02.585620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.345 [2024-07-22 17:08:02.872290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.603 [2024-07-22 17:08:03.101224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.603 malloc0 00:30:01.603 [2024-07-22 17:08:03.181532] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:01.603 [2024-07-22 17:08:03.181836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=78882 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 78882 /var/tmp/bdevperf.sock 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78882 ']' 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.603 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.899 [2024-07-22 17:08:03.305841] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:01.899 [2024-07-22 17:08:03.306222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78882 ] 00:30:01.899 [2024-07-22 17:08:03.479349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.464 [2024-07-22 17:08:03.822284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.721 [2024-07-22 17:08:04.104164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:02.721 17:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.721 17:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:02.721 17:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9FRkMzcmOC 00:30:02.978 17:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:30:03.237 [2024-07-22 17:08:04.787454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:03.495 nvme0n1 00:30:03.495 17:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.495 Running I/O for 1 seconds... 00:30:04.430 00:30:04.430 Latency(us) 00:30:04.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.430 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:04.431 Verification LBA range: start 0x0 length 0x2000 00:30:04.431 nvme0n1 : 1.03 3738.35 14.60 0.00 0.00 33863.59 7926.74 21346.01 00:30:04.431 =================================================================================================================== 00:30:04.431 Total : 3738.35 14.60 0.00 0.00 33863.59 7926.74 21346.01 00:30:04.431 0 00:30:04.431 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:30:04.431 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.431 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:04.729 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.729 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:30:04.729 "subsystems": [ 00:30:04.729 { 00:30:04.729 "subsystem": "keyring", 00:30:04.729 "config": [ 00:30:04.729 { 00:30:04.729 "method": "keyring_file_add_key", 00:30:04.729 "params": { 00:30:04.729 "name": "key0", 00:30:04.729 "path": "/tmp/tmp.9FRkMzcmOC" 00:30:04.729 } 00:30:04.729 } 00:30:04.729 ] 00:30:04.729 }, 00:30:04.729 { 00:30:04.729 "subsystem": "iobuf", 00:30:04.729 "config": [ 00:30:04.729 { 00:30:04.729 "method": "iobuf_set_options", 00:30:04.729 "params": { 00:30:04.729 "small_pool_count": 8192, 00:30:04.729 "large_pool_count": 1024, 00:30:04.729 "small_bufsize": 8192, 00:30:04.729 "large_bufsize": 135168 00:30:04.729 } 00:30:04.729 } 00:30:04.729 ] 00:30:04.729 }, 00:30:04.729 { 
00:30:04.729 "subsystem": "sock", 00:30:04.729 "config": [ 00:30:04.729 { 00:30:04.729 "method": "sock_set_default_impl", 00:30:04.729 "params": { 00:30:04.729 "impl_name": "uring" 00:30:04.729 } 00:30:04.729 }, 00:30:04.729 { 00:30:04.729 "method": "sock_impl_set_options", 00:30:04.729 "params": { 00:30:04.729 "impl_name": "ssl", 00:30:04.729 "recv_buf_size": 4096, 00:30:04.729 "send_buf_size": 4096, 00:30:04.729 "enable_recv_pipe": true, 00:30:04.729 "enable_quickack": false, 00:30:04.729 "enable_placement_id": 0, 00:30:04.729 "enable_zerocopy_send_server": true, 00:30:04.729 "enable_zerocopy_send_client": false, 00:30:04.729 "zerocopy_threshold": 0, 00:30:04.729 "tls_version": 0, 00:30:04.729 "enable_ktls": false 00:30:04.729 } 00:30:04.729 }, 00:30:04.729 { 00:30:04.729 "method": "sock_impl_set_options", 00:30:04.729 "params": { 00:30:04.729 "impl_name": "posix", 00:30:04.729 "recv_buf_size": 2097152, 00:30:04.729 "send_buf_size": 2097152, 00:30:04.729 "enable_recv_pipe": true, 00:30:04.729 "enable_quickack": false, 00:30:04.729 "enable_placement_id": 0, 00:30:04.729 "enable_zerocopy_send_server": true, 00:30:04.729 "enable_zerocopy_send_client": false, 00:30:04.729 "zerocopy_threshold": 0, 00:30:04.729 "tls_version": 0, 00:30:04.729 "enable_ktls": false 00:30:04.729 } 00:30:04.729 }, 00:30:04.729 { 00:30:04.729 "method": "sock_impl_set_options", 00:30:04.729 "params": { 00:30:04.729 "impl_name": "uring", 00:30:04.729 "recv_buf_size": 2097152, 00:30:04.729 "send_buf_size": 2097152, 00:30:04.729 "enable_recv_pipe": true, 00:30:04.730 "enable_quickack": false, 00:30:04.730 "enable_placement_id": 0, 00:30:04.730 "enable_zerocopy_send_server": false, 00:30:04.730 "enable_zerocopy_send_client": false, 00:30:04.730 "zerocopy_threshold": 0, 00:30:04.730 "tls_version": 0, 00:30:04.730 "enable_ktls": false 00:30:04.730 } 00:30:04.730 } 00:30:04.730 ] 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "subsystem": "vmd", 00:30:04.730 "config": [] 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "subsystem": "accel", 00:30:04.730 "config": [ 00:30:04.730 { 00:30:04.730 "method": "accel_set_options", 00:30:04.730 "params": { 00:30:04.730 "small_cache_size": 128, 00:30:04.730 "large_cache_size": 16, 00:30:04.730 "task_count": 2048, 00:30:04.730 "sequence_count": 2048, 00:30:04.730 "buf_count": 2048 00:30:04.730 } 00:30:04.730 } 00:30:04.730 ] 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "subsystem": "bdev", 00:30:04.730 "config": [ 00:30:04.730 { 00:30:04.730 "method": "bdev_set_options", 00:30:04.730 "params": { 00:30:04.730 "bdev_io_pool_size": 65535, 00:30:04.730 "bdev_io_cache_size": 256, 00:30:04.730 "bdev_auto_examine": true, 00:30:04.730 "iobuf_small_cache_size": 128, 00:30:04.730 "iobuf_large_cache_size": 16 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "bdev_raid_set_options", 00:30:04.730 "params": { 00:30:04.730 "process_window_size_kb": 1024, 00:30:04.730 "process_max_bandwidth_mb_sec": 0 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "bdev_iscsi_set_options", 00:30:04.730 "params": { 00:30:04.730 "timeout_sec": 30 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "bdev_nvme_set_options", 00:30:04.730 "params": { 00:30:04.730 "action_on_timeout": "none", 00:30:04.730 "timeout_us": 0, 00:30:04.730 "timeout_admin_us": 0, 00:30:04.730 "keep_alive_timeout_ms": 10000, 00:30:04.730 "arbitration_burst": 0, 00:30:04.730 "low_priority_weight": 0, 00:30:04.730 "medium_priority_weight": 0, 00:30:04.730 "high_priority_weight": 0, 00:30:04.730 
"nvme_adminq_poll_period_us": 10000, 00:30:04.730 "nvme_ioq_poll_period_us": 0, 00:30:04.730 "io_queue_requests": 0, 00:30:04.730 "delay_cmd_submit": true, 00:30:04.730 "transport_retry_count": 4, 00:30:04.730 "bdev_retry_count": 3, 00:30:04.730 "transport_ack_timeout": 0, 00:30:04.730 "ctrlr_loss_timeout_sec": 0, 00:30:04.730 "reconnect_delay_sec": 0, 00:30:04.730 "fast_io_fail_timeout_sec": 0, 00:30:04.730 "disable_auto_failback": false, 00:30:04.730 "generate_uuids": false, 00:30:04.730 "transport_tos": 0, 00:30:04.730 "nvme_error_stat": false, 00:30:04.730 "rdma_srq_size": 0, 00:30:04.730 "io_path_stat": false, 00:30:04.730 "allow_accel_sequence": false, 00:30:04.730 "rdma_max_cq_size": 0, 00:30:04.730 "rdma_cm_event_timeout_ms": 0, 00:30:04.730 "dhchap_digests": [ 00:30:04.730 "sha256", 00:30:04.730 "sha384", 00:30:04.730 "sha512" 00:30:04.730 ], 00:30:04.730 "dhchap_dhgroups": [ 00:30:04.730 "null", 00:30:04.730 "ffdhe2048", 00:30:04.730 "ffdhe3072", 00:30:04.730 "ffdhe4096", 00:30:04.730 "ffdhe6144", 00:30:04.730 "ffdhe8192" 00:30:04.730 ] 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "bdev_nvme_set_hotplug", 00:30:04.730 "params": { 00:30:04.730 "period_us": 100000, 00:30:04.730 "enable": false 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "bdev_malloc_create", 00:30:04.730 "params": { 00:30:04.730 "name": "malloc0", 00:30:04.730 "num_blocks": 8192, 00:30:04.730 "block_size": 4096, 00:30:04.730 "physical_block_size": 4096, 00:30:04.730 "uuid": "4faeafb8-9061-46b6-8591-51920d51429c", 00:30:04.730 "optimal_io_boundary": 0, 00:30:04.730 "md_size": 0, 00:30:04.730 "dif_type": 0, 00:30:04.730 "dif_is_head_of_md": false, 00:30:04.730 "dif_pi_format": 0 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "bdev_wait_for_examine" 00:30:04.730 } 00:30:04.730 ] 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "subsystem": "nbd", 00:30:04.730 "config": [] 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "subsystem": "scheduler", 00:30:04.730 "config": [ 00:30:04.730 { 00:30:04.730 "method": "framework_set_scheduler", 00:30:04.730 "params": { 00:30:04.730 "name": "static" 00:30:04.730 } 00:30:04.730 } 00:30:04.730 ] 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "subsystem": "nvmf", 00:30:04.730 "config": [ 00:30:04.730 { 00:30:04.730 "method": "nvmf_set_config", 00:30:04.730 "params": { 00:30:04.730 "discovery_filter": "match_any", 00:30:04.730 "admin_cmd_passthru": { 00:30:04.730 "identify_ctrlr": false 00:30:04.730 } 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_set_max_subsystems", 00:30:04.730 "params": { 00:30:04.730 "max_subsystems": 1024 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_set_crdt", 00:30:04.730 "params": { 00:30:04.730 "crdt1": 0, 00:30:04.730 "crdt2": 0, 00:30:04.730 "crdt3": 0 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_create_transport", 00:30:04.730 "params": { 00:30:04.730 "trtype": "TCP", 00:30:04.730 "max_queue_depth": 128, 00:30:04.730 "max_io_qpairs_per_ctrlr": 127, 00:30:04.730 "in_capsule_data_size": 4096, 00:30:04.730 "max_io_size": 131072, 00:30:04.730 "io_unit_size": 131072, 00:30:04.730 "max_aq_depth": 128, 00:30:04.730 "num_shared_buffers": 511, 00:30:04.730 "buf_cache_size": 4294967295, 00:30:04.730 "dif_insert_or_strip": false, 00:30:04.730 "zcopy": false, 00:30:04.730 "c2h_success": false, 00:30:04.730 "sock_priority": 0, 00:30:04.730 "abort_timeout_sec": 1, 00:30:04.730 "ack_timeout": 0, 00:30:04.730 
"data_wr_pool_size": 0 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_create_subsystem", 00:30:04.730 "params": { 00:30:04.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.730 "allow_any_host": false, 00:30:04.730 "serial_number": "00000000000000000000", 00:30:04.730 "model_number": "SPDK bdev Controller", 00:30:04.730 "max_namespaces": 32, 00:30:04.730 "min_cntlid": 1, 00:30:04.730 "max_cntlid": 65519, 00:30:04.730 "ana_reporting": false 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_subsystem_add_host", 00:30:04.730 "params": { 00:30:04.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.730 "host": "nqn.2016-06.io.spdk:host1", 00:30:04.730 "psk": "key0" 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_subsystem_add_ns", 00:30:04.730 "params": { 00:30:04.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.730 "namespace": { 00:30:04.730 "nsid": 1, 00:30:04.730 "bdev_name": "malloc0", 00:30:04.730 "nguid": "4FAEAFB8906146B6859151920D51429C", 00:30:04.730 "uuid": "4faeafb8-9061-46b6-8591-51920d51429c", 00:30:04.730 "no_auto_visible": false 00:30:04.730 } 00:30:04.730 } 00:30:04.730 }, 00:30:04.730 { 00:30:04.730 "method": "nvmf_subsystem_add_listener", 00:30:04.730 "params": { 00:30:04.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.730 "listen_address": { 00:30:04.730 "trtype": "TCP", 00:30:04.730 "adrfam": "IPv4", 00:30:04.730 "traddr": "10.0.0.2", 00:30:04.730 "trsvcid": "4420" 00:30:04.730 }, 00:30:04.730 "secure_channel": false, 00:30:04.730 "sock_impl": "ssl" 00:30:04.730 } 00:30:04.730 } 00:30:04.730 ] 00:30:04.730 } 00:30:04.730 ] 00:30:04.730 }' 00:30:04.730 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:30:04.989 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:30:04.989 "subsystems": [ 00:30:04.989 { 00:30:04.989 "subsystem": "keyring", 00:30:04.989 "config": [ 00:30:04.989 { 00:30:04.989 "method": "keyring_file_add_key", 00:30:04.989 "params": { 00:30:04.989 "name": "key0", 00:30:04.989 "path": "/tmp/tmp.9FRkMzcmOC" 00:30:04.989 } 00:30:04.989 } 00:30:04.989 ] 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "subsystem": "iobuf", 00:30:04.989 "config": [ 00:30:04.989 { 00:30:04.989 "method": "iobuf_set_options", 00:30:04.989 "params": { 00:30:04.989 "small_pool_count": 8192, 00:30:04.989 "large_pool_count": 1024, 00:30:04.989 "small_bufsize": 8192, 00:30:04.989 "large_bufsize": 135168 00:30:04.989 } 00:30:04.989 } 00:30:04.989 ] 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "subsystem": "sock", 00:30:04.989 "config": [ 00:30:04.989 { 00:30:04.989 "method": "sock_set_default_impl", 00:30:04.989 "params": { 00:30:04.989 "impl_name": "uring" 00:30:04.989 } 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "method": "sock_impl_set_options", 00:30:04.989 "params": { 00:30:04.989 "impl_name": "ssl", 00:30:04.989 "recv_buf_size": 4096, 00:30:04.989 "send_buf_size": 4096, 00:30:04.989 "enable_recv_pipe": true, 00:30:04.989 "enable_quickack": false, 00:30:04.989 "enable_placement_id": 0, 00:30:04.989 "enable_zerocopy_send_server": true, 00:30:04.989 "enable_zerocopy_send_client": false, 00:30:04.989 "zerocopy_threshold": 0, 00:30:04.989 "tls_version": 0, 00:30:04.989 "enable_ktls": false 00:30:04.989 } 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "method": "sock_impl_set_options", 00:30:04.989 "params": { 00:30:04.989 "impl_name": "posix", 00:30:04.989 "recv_buf_size": 2097152, 
00:30:04.989 "send_buf_size": 2097152, 00:30:04.989 "enable_recv_pipe": true, 00:30:04.989 "enable_quickack": false, 00:30:04.989 "enable_placement_id": 0, 00:30:04.989 "enable_zerocopy_send_server": true, 00:30:04.989 "enable_zerocopy_send_client": false, 00:30:04.989 "zerocopy_threshold": 0, 00:30:04.989 "tls_version": 0, 00:30:04.989 "enable_ktls": false 00:30:04.989 } 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "method": "sock_impl_set_options", 00:30:04.989 "params": { 00:30:04.989 "impl_name": "uring", 00:30:04.989 "recv_buf_size": 2097152, 00:30:04.989 "send_buf_size": 2097152, 00:30:04.989 "enable_recv_pipe": true, 00:30:04.989 "enable_quickack": false, 00:30:04.989 "enable_placement_id": 0, 00:30:04.989 "enable_zerocopy_send_server": false, 00:30:04.989 "enable_zerocopy_send_client": false, 00:30:04.989 "zerocopy_threshold": 0, 00:30:04.989 "tls_version": 0, 00:30:04.989 "enable_ktls": false 00:30:04.989 } 00:30:04.989 } 00:30:04.989 ] 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "subsystem": "vmd", 00:30:04.989 "config": [] 00:30:04.989 }, 00:30:04.989 { 00:30:04.989 "subsystem": "accel", 00:30:04.989 "config": [ 00:30:04.990 { 00:30:04.990 "method": "accel_set_options", 00:30:04.990 "params": { 00:30:04.990 "small_cache_size": 128, 00:30:04.990 "large_cache_size": 16, 00:30:04.990 "task_count": 2048, 00:30:04.990 "sequence_count": 2048, 00:30:04.990 "buf_count": 2048 00:30:04.990 } 00:30:04.990 } 00:30:04.990 ] 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "subsystem": "bdev", 00:30:04.990 "config": [ 00:30:04.990 { 00:30:04.990 "method": "bdev_set_options", 00:30:04.990 "params": { 00:30:04.990 "bdev_io_pool_size": 65535, 00:30:04.990 "bdev_io_cache_size": 256, 00:30:04.990 "bdev_auto_examine": true, 00:30:04.990 "iobuf_small_cache_size": 128, 00:30:04.990 "iobuf_large_cache_size": 16 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_raid_set_options", 00:30:04.990 "params": { 00:30:04.990 "process_window_size_kb": 1024, 00:30:04.990 "process_max_bandwidth_mb_sec": 0 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_iscsi_set_options", 00:30:04.990 "params": { 00:30:04.990 "timeout_sec": 30 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_nvme_set_options", 00:30:04.990 "params": { 00:30:04.990 "action_on_timeout": "none", 00:30:04.990 "timeout_us": 0, 00:30:04.990 "timeout_admin_us": 0, 00:30:04.990 "keep_alive_timeout_ms": 10000, 00:30:04.990 "arbitration_burst": 0, 00:30:04.990 "low_priority_weight": 0, 00:30:04.990 "medium_priority_weight": 0, 00:30:04.990 "high_priority_weight": 0, 00:30:04.990 "nvme_adminq_poll_period_us": 10000, 00:30:04.990 "nvme_ioq_poll_period_us": 0, 00:30:04.990 "io_queue_requests": 512, 00:30:04.990 "delay_cmd_submit": true, 00:30:04.990 "transport_retry_count": 4, 00:30:04.990 "bdev_retry_count": 3, 00:30:04.990 "transport_ack_timeout": 0, 00:30:04.990 "ctrlr_loss_timeout_sec": 0, 00:30:04.990 "reconnect_delay_sec": 0, 00:30:04.990 "fast_io_fail_timeout_sec": 0, 00:30:04.990 "disable_auto_failback": false, 00:30:04.990 "generate_uuids": false, 00:30:04.990 "transport_tos": 0, 00:30:04.990 "nvme_error_stat": false, 00:30:04.990 "rdma_srq_size": 0, 00:30:04.990 "io_path_stat": false, 00:30:04.990 "allow_accel_sequence": false, 00:30:04.990 "rdma_max_cq_size": 0, 00:30:04.990 "rdma_cm_event_timeout_ms": 0, 00:30:04.990 "dhchap_digests": [ 00:30:04.990 "sha256", 00:30:04.990 "sha384", 00:30:04.990 "sha512" 00:30:04.990 ], 00:30:04.990 "dhchap_dhgroups": [ 00:30:04.990 "null", 
00:30:04.990 "ffdhe2048", 00:30:04.990 "ffdhe3072", 00:30:04.990 "ffdhe4096", 00:30:04.990 "ffdhe6144", 00:30:04.990 "ffdhe8192" 00:30:04.990 ] 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_nvme_attach_controller", 00:30:04.990 "params": { 00:30:04.990 "name": "nvme0", 00:30:04.990 "trtype": "TCP", 00:30:04.990 "adrfam": "IPv4", 00:30:04.990 "traddr": "10.0.0.2", 00:30:04.990 "trsvcid": "4420", 00:30:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.990 "prchk_reftag": false, 00:30:04.990 "prchk_guard": false, 00:30:04.990 "ctrlr_loss_timeout_sec": 0, 00:30:04.990 "reconnect_delay_sec": 0, 00:30:04.990 "fast_io_fail_timeout_sec": 0, 00:30:04.990 "psk": "key0", 00:30:04.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.990 "hdgst": false, 00:30:04.990 "ddgst": false 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_nvme_set_hotplug", 00:30:04.990 "params": { 00:30:04.990 "period_us": 100000, 00:30:04.990 "enable": false 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_enable_histogram", 00:30:04.990 "params": { 00:30:04.990 "name": "nvme0n1", 00:30:04.990 "enable": true 00:30:04.990 } 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "method": "bdev_wait_for_examine" 00:30:04.990 } 00:30:04.990 ] 00:30:04.990 }, 00:30:04.990 { 00:30:04.990 "subsystem": "nbd", 00:30:04.990 "config": [] 00:30:04.990 } 00:30:04.990 ] 00:30:04.990 }' 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 78882 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78882 ']' 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78882 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78882 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:04.990 killing process with pid 78882 00:30:04.990 Received shutdown signal, test time was about 1.000000 seconds 00:30:04.990 00:30:04.990 Latency(us) 00:30:04.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.990 =================================================================================================================== 00:30:04.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78882' 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78882 00:30:04.990 17:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78882 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 78849 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78849 ']' 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78849 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78849 00:30:06.944 killing process with pid 78849 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78849' 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78849 00:30:06.944 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78849 00:30:08.363 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:30:08.363 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:08.363 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:30:08.363 "subsystems": [ 00:30:08.363 { 00:30:08.363 "subsystem": "keyring", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "keyring_file_add_key", 00:30:08.363 "params": { 00:30:08.363 "name": "key0", 00:30:08.363 "path": "/tmp/tmp.9FRkMzcmOC" 00:30:08.363 } 00:30:08.363 } 00:30:08.363 ] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "iobuf", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "iobuf_set_options", 00:30:08.363 "params": { 00:30:08.363 "small_pool_count": 8192, 00:30:08.363 "large_pool_count": 1024, 00:30:08.363 "small_bufsize": 8192, 00:30:08.363 "large_bufsize": 135168 00:30:08.363 } 00:30:08.363 } 00:30:08.363 ] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "sock", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "sock_set_default_impl", 00:30:08.363 "params": { 00:30:08.363 "impl_name": "uring" 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "sock_impl_set_options", 00:30:08.363 "params": { 00:30:08.363 "impl_name": "ssl", 00:30:08.363 "recv_buf_size": 4096, 00:30:08.363 "send_buf_size": 4096, 00:30:08.363 "enable_recv_pipe": true, 00:30:08.363 "enable_quickack": false, 00:30:08.363 "enable_placement_id": 0, 00:30:08.363 "enable_zerocopy_send_server": true, 00:30:08.363 "enable_zerocopy_send_client": false, 00:30:08.363 "zerocopy_threshold": 0, 00:30:08.363 "tls_version": 0, 00:30:08.363 "enable_ktls": false 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "sock_impl_set_options", 00:30:08.363 "params": { 00:30:08.363 "impl_name": "posix", 00:30:08.363 "recv_buf_size": 2097152, 00:30:08.363 "send_buf_size": 2097152, 00:30:08.363 "enable_recv_pipe": true, 00:30:08.363 "enable_quickack": false, 00:30:08.363 "enable_placement_id": 0, 00:30:08.363 "enable_zerocopy_send_server": true, 00:30:08.363 "enable_zerocopy_send_client": false, 00:30:08.363 "zerocopy_threshold": 0, 00:30:08.363 "tls_version": 0, 00:30:08.363 "enable_ktls": false 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "sock_impl_set_options", 00:30:08.363 "params": { 00:30:08.363 "impl_name": "uring", 00:30:08.363 "recv_buf_size": 2097152, 00:30:08.363 "send_buf_size": 2097152, 00:30:08.363 "enable_recv_pipe": true, 00:30:08.363 "enable_quickack": false, 00:30:08.363 "enable_placement_id": 0, 00:30:08.363 
"enable_zerocopy_send_server": false, 00:30:08.363 "enable_zerocopy_send_client": false, 00:30:08.363 "zerocopy_threshold": 0, 00:30:08.363 "tls_version": 0, 00:30:08.363 "enable_ktls": false 00:30:08.363 } 00:30:08.363 } 00:30:08.363 ] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "vmd", 00:30:08.363 "config": [] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "accel", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "accel_set_options", 00:30:08.363 "params": { 00:30:08.363 "small_cache_size": 128, 00:30:08.363 "large_cache_size": 16, 00:30:08.363 "task_count": 2048, 00:30:08.363 "sequence_count": 2048, 00:30:08.363 "buf_count": 2048 00:30:08.363 } 00:30:08.363 } 00:30:08.363 ] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "bdev", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "bdev_set_options", 00:30:08.363 "params": { 00:30:08.363 "bdev_io_pool_size": 65535, 00:30:08.363 "bdev_io_cache_size": 256, 00:30:08.363 "bdev_auto_examine": true, 00:30:08.363 "iobuf_small_cache_size": 128, 00:30:08.363 "iobuf_large_cache_size": 16 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "bdev_raid_set_options", 00:30:08.363 "params": { 00:30:08.363 "process_window_size_kb": 1024, 00:30:08.363 "process_max_bandwidth_mb_sec": 0 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "bdev_iscsi_set_options", 00:30:08.363 "params": { 00:30:08.363 "timeout_sec": 30 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "bdev_nvme_set_options", 00:30:08.363 "params": { 00:30:08.363 "action_on_timeout": "none", 00:30:08.363 "timeout_us": 0, 00:30:08.363 "timeout_admin_us": 0, 00:30:08.363 "keep_alive_timeout_ms": 10000, 00:30:08.363 "arbitration_burst": 0, 00:30:08.363 "low_priority_weight": 0, 00:30:08.363 "medium_priority_weight": 0, 00:30:08.363 "high_priority_weight": 0, 00:30:08.363 "nvme_adminq_poll_period_us": 10000, 00:30:08.363 "nvme_ioq_poll_period_us": 0, 00:30:08.363 "io_queue_requests": 0, 00:30:08.363 "delay_cmd_submit": true, 00:30:08.363 "transport_retry_count": 4, 00:30:08.363 "bdev_retry_count": 3, 00:30:08.363 "transport_ack_timeout": 0, 00:30:08.363 "ctrlr_loss_timeout_sec": 0, 00:30:08.363 "reconnect_delay_sec": 0, 00:30:08.363 "fast_io_fail_timeout_sec": 0, 00:30:08.363 "disable_auto_failback": false, 00:30:08.363 "generate_uuids": false, 00:30:08.363 "transport_tos": 0, 00:30:08.363 "nvme_error_stat": false, 00:30:08.363 "rdma_srq_size": 0, 00:30:08.363 "io_path_stat": false, 00:30:08.363 "allow_accel_sequence": false, 00:30:08.363 "rdma_max_cq_size": 0, 00:30:08.363 "rdma_cm_event_timeout_ms": 0, 00:30:08.363 "dhchap_digests": [ 00:30:08.363 "sha256", 00:30:08.363 "sha384", 00:30:08.363 "sha512" 00:30:08.363 ], 00:30:08.363 "dhchap_dhgroups": [ 00:30:08.363 "null", 00:30:08.363 "ffdhe2048", 00:30:08.363 "ffdhe3072", 00:30:08.363 "ffdhe4096", 00:30:08.363 "ffdhe6144", 00:30:08.363 "ffdhe8192" 00:30:08.363 ] 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "bdev_nvme_set_hotplug", 00:30:08.363 "params": { 00:30:08.363 "period_us": 100000, 00:30:08.363 "enable": false 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "bdev_malloc_create", 00:30:08.363 "params": { 00:30:08.363 "name": "malloc0", 00:30:08.363 "num_blocks": 8192, 00:30:08.363 "block_size": 4096, 00:30:08.363 "physical_block_size": 4096, 00:30:08.363 "uuid": "4faeafb8-9061-46b6-8591-51920d51429c", 00:30:08.363 "optimal_io_boundary": 0, 00:30:08.363 "md_size": 0, 00:30:08.363 
"dif_type": 0, 00:30:08.363 "dif_is_head_of_md": false, 00:30:08.363 "dif_pi_format": 0 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "bdev_wait_for_examine" 00:30:08.363 } 00:30:08.363 ] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "nbd", 00:30:08.363 "config": [] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "scheduler", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "framework_set_scheduler", 00:30:08.363 "params": { 00:30:08.363 "name": "static" 00:30:08.363 } 00:30:08.363 } 00:30:08.363 ] 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "subsystem": "nvmf", 00:30:08.363 "config": [ 00:30:08.363 { 00:30:08.363 "method": "nvmf_set_config", 00:30:08.363 "params": { 00:30:08.363 "discovery_filter": "match_any", 00:30:08.363 "admin_cmd_passthru": { 00:30:08.363 "identify_ctrlr": false 00:30:08.363 } 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "nvmf_set_max_subsystems", 00:30:08.363 "params": { 00:30:08.363 "max_subsystems": 1024 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "nvmf_set_crdt", 00:30:08.363 "params": { 00:30:08.363 "crdt1": 0, 00:30:08.363 "crdt2": 0, 00:30:08.363 "crdt3": 0 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.363 "method": "nvmf_create_transport", 00:30:08.363 "params": { 00:30:08.363 "trtype": "TCP", 00:30:08.363 "max_queue_depth": 128, 00:30:08.363 "max_io_qpairs_per_ctrlr": 127, 00:30:08.363 "in_capsule_data_size": 4096, 00:30:08.363 "max_io_size": 131072, 00:30:08.363 "io_unit_size": 131072, 00:30:08.363 "max_aq_depth": 128, 00:30:08.363 "num_shared_buffers": 511, 00:30:08.363 "buf_cache_size": 4294967295, 00:30:08.363 "dif_insert_or_strip": false, 00:30:08.363 "zcopy": false, 00:30:08.363 "c2h_success": false, 00:30:08.363 "sock_priority": 0, 00:30:08.363 "abort_timeout_sec": 1, 00:30:08.363 "ack_timeout": 0, 00:30:08.363 "data_wr_pool_size": 0 00:30:08.363 } 00:30:08.363 }, 00:30:08.363 { 00:30:08.364 "method": "nvmf_create_subsystem", 00:30:08.364 "params": { 00:30:08.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.364 "allow_any_host": false, 00:30:08.364 "serial_number": "00000000000000000000", 00:30:08.364 "model_number": "SPDK bdev Controller", 00:30:08.364 "max_namespaces": 32, 00:30:08.364 "min_cntlid": 1, 00:30:08.364 "max_cntlid": 65519, 00:30:08.364 "ana_reporting": false 00:30:08.364 } 00:30:08.364 }, 00:30:08.364 { 00:30:08.364 "method": "nvmf_subsystem_add_host", 00:30:08.364 "params": { 00:30:08.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.364 "host": "nqn.2016-06.io.spdk:host1", 00:30:08.364 "psk": "key0" 00:30:08.364 } 00:30:08.364 }, 00:30:08.364 { 00:30:08.364 "method": "nvmf_subsystem_add_ns", 00:30:08.364 "params": { 00:30:08.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.364 "namespace": { 00:30:08.364 "nsid": 1, 00:30:08.364 "bdev_name": "malloc0", 00:30:08.364 "nguid": "4FAEAFB8906146B6859151920D51429C", 00:30:08.364 "uuid": "4faeafb8-9061-46b6-8591-51920d51429c", 00:30:08.364 "no_auto_visible": false 00:30:08.364 } 00:30:08.364 } 00:30:08.364 }, 00:30:08.364 { 00:30:08.364 "method": "nvmf_subsystem_add_listener", 00:30:08.364 "params": { 00:30:08.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.364 "listen_address": { 00:30:08.364 "trtype": "TCP", 00:30:08.364 "adrfam": "IPv4", 00:30:08.364 "traddr": "10.0.0.2", 00:30:08.364 "trsvcid": "4420" 00:30:08.364 }, 00:30:08.364 "secure_channel": false, 00:30:08.364 "sock_impl": "ssl" 00:30:08.364 } 00:30:08.364 } 00:30:08.364 ] 00:30:08.364 } 00:30:08.364 ] 00:30:08.364 
}' 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78977 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78977 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78977 ']' 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.364 17:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 [2024-07-22 17:08:09.977402] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:08.364 [2024-07-22 17:08:09.977547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.623 [2024-07-22 17:08:10.152005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.881 [2024-07-22 17:08:10.417184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.881 [2024-07-22 17:08:10.417271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.881 [2024-07-22 17:08:10.417289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.881 [2024-07-22 17:08:10.417305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.881 [2024-07-22 17:08:10.417317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:08.881 [2024-07-22 17:08:10.417487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.447 [2024-07-22 17:08:10.809224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:09.447 [2024-07-22 17:08:11.035747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.706 [2024-07-22 17:08:11.080580] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:09.706 [2024-07-22 17:08:11.080863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.706 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.706 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:09.706 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.706 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.706 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:09.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.706 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=79009 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 79009 /var/tmp/bdevperf.sock 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 79009 ']' 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:30:09.707 17:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:30:09.707 "subsystems": [ 00:30:09.707 { 00:30:09.707 "subsystem": "keyring", 00:30:09.707 "config": [ 00:30:09.707 { 00:30:09.707 "method": "keyring_file_add_key", 00:30:09.707 "params": { 00:30:09.707 "name": "key0", 00:30:09.707 "path": "/tmp/tmp.9FRkMzcmOC" 00:30:09.707 } 00:30:09.707 } 00:30:09.707 ] 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "subsystem": "iobuf", 00:30:09.707 "config": [ 00:30:09.707 { 00:30:09.707 "method": "iobuf_set_options", 00:30:09.707 "params": { 00:30:09.707 "small_pool_count": 8192, 00:30:09.707 "large_pool_count": 1024, 00:30:09.707 "small_bufsize": 8192, 00:30:09.707 "large_bufsize": 135168 00:30:09.707 } 00:30:09.707 } 00:30:09.707 ] 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "subsystem": "sock", 00:30:09.707 "config": [ 00:30:09.707 { 00:30:09.707 "method": "sock_set_default_impl", 00:30:09.707 "params": { 00:30:09.707 "impl_name": "uring" 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "sock_impl_set_options", 00:30:09.707 "params": { 00:30:09.707 "impl_name": "ssl", 00:30:09.707 "recv_buf_size": 4096, 00:30:09.707 "send_buf_size": 4096, 00:30:09.707 "enable_recv_pipe": true, 00:30:09.707 "enable_quickack": false, 00:30:09.707 "enable_placement_id": 0, 00:30:09.707 "enable_zerocopy_send_server": true, 00:30:09.707 "enable_zerocopy_send_client": false, 00:30:09.707 "zerocopy_threshold": 0, 00:30:09.707 "tls_version": 0, 00:30:09.707 "enable_ktls": false 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "sock_impl_set_options", 00:30:09.707 "params": { 00:30:09.707 "impl_name": "posix", 00:30:09.707 "recv_buf_size": 2097152, 00:30:09.707 "send_buf_size": 2097152, 00:30:09.707 "enable_recv_pipe": true, 00:30:09.707 "enable_quickack": false, 00:30:09.707 "enable_placement_id": 0, 00:30:09.707 "enable_zerocopy_send_server": true, 00:30:09.707 "enable_zerocopy_send_client": false, 00:30:09.707 "zerocopy_threshold": 0, 00:30:09.707 "tls_version": 0, 00:30:09.707 "enable_ktls": false 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "sock_impl_set_options", 00:30:09.707 "params": { 00:30:09.707 "impl_name": "uring", 00:30:09.707 "recv_buf_size": 2097152, 00:30:09.707 "send_buf_size": 2097152, 00:30:09.707 "enable_recv_pipe": true, 00:30:09.707 "enable_quickack": false, 00:30:09.707 "enable_placement_id": 0, 00:30:09.707 "enable_zerocopy_send_server": false, 00:30:09.707 "enable_zerocopy_send_client": false, 00:30:09.707 "zerocopy_threshold": 0, 00:30:09.707 "tls_version": 0, 00:30:09.707 "enable_ktls": false 00:30:09.707 } 00:30:09.707 } 00:30:09.707 ] 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "subsystem": "vmd", 00:30:09.707 "config": [] 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "subsystem": "accel", 00:30:09.707 "config": [ 00:30:09.707 { 00:30:09.707 "method": "accel_set_options", 00:30:09.707 "params": { 00:30:09.707 "small_cache_size": 128, 00:30:09.707 "large_cache_size": 16, 00:30:09.707 "task_count": 2048, 00:30:09.707 "sequence_count": 2048, 00:30:09.707 "buf_count": 2048 
00:30:09.707 } 00:30:09.707 } 00:30:09.707 ] 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "subsystem": "bdev", 00:30:09.707 "config": [ 00:30:09.707 { 00:30:09.707 "method": "bdev_set_options", 00:30:09.707 "params": { 00:30:09.707 "bdev_io_pool_size": 65535, 00:30:09.707 "bdev_io_cache_size": 256, 00:30:09.707 "bdev_auto_examine": true, 00:30:09.707 "iobuf_small_cache_size": 128, 00:30:09.707 "iobuf_large_cache_size": 16 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_raid_set_options", 00:30:09.707 "params": { 00:30:09.707 "process_window_size_kb": 1024, 00:30:09.707 "process_max_bandwidth_mb_sec": 0 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_iscsi_set_options", 00:30:09.707 "params": { 00:30:09.707 "timeout_sec": 30 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_nvme_set_options", 00:30:09.707 "params": { 00:30:09.707 "action_on_timeout": "none", 00:30:09.707 "timeout_us": 0, 00:30:09.707 "timeout_admin_us": 0, 00:30:09.707 "keep_alive_timeout_ms": 10000, 00:30:09.707 "arbitration_burst": 0, 00:30:09.707 "low_priority_weight": 0, 00:30:09.707 "medium_priority_weight": 0, 00:30:09.707 "high_priority_weight": 0, 00:30:09.707 "nvme_adminq_poll_period_us": 10000, 00:30:09.707 "nvme_ioq_poll_period_us": 0, 00:30:09.707 "io_queue_requests": 512, 00:30:09.707 "delay_cmd_submit": true, 00:30:09.707 "transport_retry_count": 4, 00:30:09.707 "bdev_retry_count": 3, 00:30:09.707 "transport_ack_timeout": 0, 00:30:09.707 "ctrlr_loss_timeout_sec": 0, 00:30:09.707 "reconnect_delay_sec": 0, 00:30:09.707 "fast_io_fail_timeout_sec": 0, 00:30:09.707 "disable_auto_failback": false, 00:30:09.707 "generate_uuids": false, 00:30:09.707 "transport_tos": 0, 00:30:09.707 "nvme_error_stat": false, 00:30:09.707 "rdma_srq_size": 0, 00:30:09.707 "io_path_stat": false, 00:30:09.707 "allow_accel_sequence": false, 00:30:09.707 "rdma_max_cq_size": 0, 00:30:09.707 "rdma_cm_event_timeout_ms": 0, 00:30:09.707 "dhchap_digests": [ 00:30:09.707 "sha256", 00:30:09.707 "sha384", 00:30:09.707 "sha512" 00:30:09.707 ], 00:30:09.707 "dhchap_dhgroups": [ 00:30:09.707 "null", 00:30:09.707 "ffdhe2048", 00:30:09.707 "ffdhe3072", 00:30:09.707 "ffdhe4096", 00:30:09.707 "ffdhe6144", 00:30:09.707 "ffdhe8192" 00:30:09.707 ] 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_nvme_attach_controller", 00:30:09.707 "params": { 00:30:09.707 "name": "nvme0", 00:30:09.707 "trtype": "TCP", 00:30:09.707 "adrfam": "IPv4", 00:30:09.707 "traddr": "10.0.0.2", 00:30:09.707 "trsvcid": "4420", 00:30:09.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:09.707 "prchk_reftag": false, 00:30:09.707 "prchk_guard": false, 00:30:09.707 "ctrlr_loss_timeout_sec": 0, 00:30:09.707 "reconnect_delay_sec": 0, 00:30:09.707 "fast_io_fail_timeout_sec": 0, 00:30:09.707 "psk": "key0", 00:30:09.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:09.707 "hdgst": false, 00:30:09.707 "ddgst": false 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_nvme_set_hotplug", 00:30:09.707 "params": { 00:30:09.707 "period_us": 100000, 00:30:09.707 "enable": false 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_enable_histogram", 00:30:09.707 "params": { 00:30:09.707 "name": "nvme0n1", 00:30:09.707 "enable": true 00:30:09.707 } 00:30:09.707 }, 00:30:09.707 { 00:30:09.707 "method": "bdev_wait_for_examine" 00:30:09.707 } 00:30:09.707 ] 00:30:09.708 }, 00:30:09.708 { 00:30:09.708 "subsystem": "nbd", 00:30:09.708 "config": [] 00:30:09.708 } 
00:30:09.708 ] 00:30:09.708 }' 00:30:09.708 [2024-07-22 17:08:11.284201] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:09.708 [2024-07-22 17:08:11.284717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79009 ] 00:30:09.966 [2024-07-22 17:08:11.475913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.223 [2024-07-22 17:08:11.785124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.790 [2024-07-22 17:08:12.137219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:10.790 [2024-07-22 17:08:12.283763] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:11.049 17:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.049 17:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:11.049 17:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:11.049 17:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:30:11.309 17:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.309 17:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:11.309 Running I/O for 1 seconds... 
00:30:12.683 00:30:12.683 Latency(us) 00:30:12.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.683 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:12.683 Verification LBA range: start 0x0 length 0x2000 00:30:12.683 nvme0n1 : 1.02 4141.18 16.18 0.00 0.00 30583.12 10111.27 21720.50 00:30:12.683 =================================================================================================================== 00:30:12.683 Total : 4141.18 16.18 0.00 0.00 30583.12 10111.27 21720.50 00:30:12.683 0 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:12.683 nvmf_trace.0 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 79009 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 79009 ']' 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 79009 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:12.683 17:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79009 00:30:12.683 killing process with pid 79009 00:30:12.683 Received shutdown signal, test time was about 1.000000 seconds 00:30:12.683 00:30:12.683 Latency(us) 00:30:12.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.683 =================================================================================================================== 00:30:12.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.683 17:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:12.683 17:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:12.683 17:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79009' 00:30:12.683 17:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 79009 00:30:12.683 17:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 79009 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:14.057 rmmod nvme_tcp 00:30:14.057 rmmod nvme_fabrics 00:30:14.057 rmmod nvme_keyring 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 78977 ']' 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 78977 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78977 ']' 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78977 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78977 00:30:14.057 killing process with pid 78977 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78977' 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78977 00:30:14.057 17:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78977 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mVn1BPvNeJ /tmp/tmp.dwVRGQ59jr /tmp/tmp.9FRkMzcmOC 00:30:15.958 ************************************ 00:30:15.958 END TEST nvmf_tls 00:30:15.958 ************************************ 00:30:15.958 00:30:15.958 real 1m55.638s 00:30:15.958 user 3m1.614s 00:30:15.958 sys 0m29.865s 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:15.958 ************************************ 00:30:15.958 START TEST nvmf_fips 00:30:15.958 ************************************ 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:15.958 * Looking for test storage... 00:30:15.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.958 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:30:15.959 17:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:30:15.959 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:30:15.960 Error setting digest 00:30:15.960 0072773FFD7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:30:15.960 0072773FFD7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:15.960 
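The trace above is the fips.sh preflight: check_openssl_version walks cmp_versions field by field to require OpenSSL >= 3.0.0, the provider list must contain a FIPS provider, and a deliberate openssl md5 is expected to fail, so the "Error setting digest" lines are a pass rather than a failure. A condensed, illustrative bash sketch of the same idea follows; the function name fips_preflight and the simplified "3.*" version test are mine, the real logic lives in fips/fips.sh and scripts/common.sh.

  # Illustrative sketch only, not the exact fips.sh implementation.
  fips_preflight() {
      local ver
      ver=$(openssl version | awk '{print $2}')
      # fips.sh does a full field-by-field compare (cmp_versions); "3.*" is a shortcut.
      [[ $ver == 3.* ]] || { echo "need OpenSSL >= 3.0.0, got $ver" >&2; return 1; }
      openssl list -providers | grep -qi fips \
          || { echo "no FIPS provider loaded" >&2; return 1; }
      # Under an enforced FIPS configuration a non-approved digest must be rejected.
      if echo -n test | openssl md5 >/dev/null 2>&1; then
          echo "MD5 still accepted, FIPS restrictions are not active" >&2
          return 1
      fi
      echo "FIPS preflight passed"
  }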
17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:15.960 Cannot find device "nvmf_tgt_br" 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:15.960 Cannot find device "nvmf_tgt_br2" 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:30:15.960 17:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:15.960 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:16.219 Cannot find device "nvmf_tgt_br" 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:16.219 Cannot find device "nvmf_tgt_br2" 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:16.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:16.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:16.219 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:16.478 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:16.478 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:16.478 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:16.478 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:16.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:30:16.478 00:30:16.478 --- 10.0.0.2 ping statistics --- 00:30:16.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.478 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:16.478 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:16.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:16.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:30:16.478 00:30:16.478 --- 10.0.0.3 ping statistics --- 00:30:16.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.479 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:16.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:30:16.479 00:30:16.479 --- 10.0.0.1 ping statistics --- 00:30:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.479 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=79308 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 79308 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 79308 ']' 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.479 17:08:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 [2024-07-22 17:08:18.061306] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
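By this point nvmf_veth_init has finished: the host side owns nvmf_init_if (10.0.0.1), the nvmf_tgt_ns_spdk namespace owns nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the peer ends are enslaved to the nvmf_br bridge, and the three pings prove reachability before the target starts. A condensed replay of the commands traced above (same names and addresses; the full helper also tears down any stale devices first, which is what the "Cannot find device" lines were about):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host -> target namespace, as verified above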
00:30:16.479 [2024-07-22 17:08:18.061449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.741 [2024-07-22 17:08:18.236857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.004 [2024-07-22 17:08:18.515987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.004 [2024-07-22 17:08:18.516060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.004 [2024-07-22 17:08:18.516080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.004 [2024-07-22 17:08:18.516108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.004 [2024-07-22 17:08:18.516120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.004 [2024-07-22 17:08:18.516174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.270 [2024-07-22 17:08:18.784080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:17.539 17:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.539 17:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:30:17.539 17:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.539 17:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:17.539 17:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:17.539 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:17.811 [2024-07-22 17:08:19.299037] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.811 [2024-07-22 17:08:19.315044] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:17.811 [2024-07-22 17:08:19.315347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.811 [2024-07-22 17:08:19.397321] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:17.811 malloc0 00:30:17.811 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:17.811 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=79342 00:30:17.811 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:17.811 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 79342 /var/tmp/bdevperf.sock 00:30:18.085 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 79342 ']' 00:30:18.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:18.085 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.085 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.085 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.085 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.085 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:18.085 [2024-07-22 17:08:19.603306] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:18.085 [2024-07-22 17:08:19.603487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79342 ] 00:30:18.360 [2024-07-22 17:08:19.785493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.622 [2024-07-22 17:08:20.060224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.881 [2024-07-22 17:08:20.345000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:19.140 17:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.140 17:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:30:19.140 17:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:19.140 [2024-07-22 17:08:20.748881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:19.140 [2024-07-22 17:08:20.749066] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:19.399 TLSTESTn1 00:30:19.399 17:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:19.399 Running I/O for 10 seconds... 
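The TLS leg of the test is driven by a pre-shared key: fips.sh writes the NVMeTLSkey string to key.txt with mode 0600, configures the target listener through scripts/rpc.py, then has bdevperf attach with the same PSK, which is why both sides log the "experimental" and deprecation warnings seen above. Condensed from the commands in the trace (the key value, paths and options are exactly as logged; target-side subsystem setup is omitted here):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"       # PSK file must not be readable by other users
  # initiator side: bdevperf (RPC socket /var/tmp/bdevperf.sock) attaches over TLS
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # then drive I/O through the TLS connection for 10 seconds:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests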
00:30:29.394 00:30:29.394 Latency(us) 00:30:29.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.394 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:29.394 Verification LBA range: start 0x0 length 0x2000 00:30:29.394 TLSTESTn1 : 10.02 3956.22 15.45 0.00 0.00 32295.59 6928.09 30208.98 00:30:29.394 =================================================================================================================== 00:30:29.394 Total : 3956.22 15.45 0.00 0.00 32295.59 6928.09 30208.98 00:30:29.394 0 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:29.652 nvmf_trace.0 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 79342 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 79342 ']' 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 79342 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79342 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79342' 00:30:29.652 killing process with pid 79342 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 79342 00:30:29.652 Received shutdown signal, test time was about 10.000000 seconds 00:30:29.652 00:30:29.652 Latency(us) 00:30:29.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.652 =================================================================================================================== 00:30:29.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.652 [2024-07-22 17:08:31.158285] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:29.652 17:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 79342 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:31.557 rmmod nvme_tcp 00:30:31.557 rmmod nvme_fabrics 00:30:31.557 rmmod nvme_keyring 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 79308 ']' 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 79308 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 79308 ']' 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 79308 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79308 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:31.557 killing process with pid 79308 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79308' 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 79308 00:30:31.557 [2024-07-22 17:08:32.787579] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:31.557 17:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 79308 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:32.937 00:30:32.937 real 0m17.230s 00:30:32.937 user 0m24.017s 00:30:32.937 sys 0m5.820s 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.937 ************************************ 00:30:32.937 END TEST nvmf_fips 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:32.937 ************************************ 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:32.937 ************************************ 00:30:32.937 START TEST nvmf_fuzz 00:30:32.937 ************************************ 00:30:32.937 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:30:33.196 * Looking for test storage... 
00:30:33.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.196 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
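Before the fuzz target comes up, sourcing nvmf/common.sh derives a per-run host identity: nvme gen-hostnqn produced the NQN shown above, and the host ID is its trailing UUID. A minimal sketch of that derivation; the parameter-expansion step is my illustration, the exact extraction in common.sh may differ.

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':'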
00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:33.197 Cannot find device "nvmf_tgt_br" 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:33.197 Cannot find device "nvmf_tgt_br2" 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:33.197 Cannot find device "nvmf_tgt_br" 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 
00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:33.197 Cannot find device "nvmf_tgt_br2" 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:33.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:33.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:33.197 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:33.456 17:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:33.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:30:33.456 00:30:33.456 --- 10.0.0.2 ping statistics --- 00:30:33.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.456 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:33.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:33.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:30:33.456 00:30:33.456 --- 10.0.0.3 ping statistics --- 00:30:33.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.456 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:30:33.456 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:33.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:30:33.456 00:30:33.456 --- 10.0.0.1 ping statistics --- 00:30:33.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.456 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79704 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # 
waitforlisten 79704 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 79704 ']' 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.457 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.395 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:34.653 Malloc0 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
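The rpc_cmd calls above assemble the fuzz target: a TCP transport with the -o -u 8192 options, a 64 MB malloc bdev with 512-byte blocks, a subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. In these tests rpc_cmd forwards to scripts/rpc.py against the running nvmf_tgt, so the roughly equivalent direct invocation is:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create -b Malloc0 64 512      # 64 MB bdev, 512-byte block size
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420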
00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:30:34.653 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:30:35.586 Shutting down the fuzz application 00:30:35.586 17:08:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:30:37.009 Shutting down the fuzz application 00:30:37.009 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.009 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.009 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.010 rmmod nvme_tcp 00:30:37.010 rmmod nvme_fabrics 00:30:37.010 rmmod nvme_keyring 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 79704 ']' 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 79704 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 79704 ']' 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 79704 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79704 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:37.010 killing process with pid 79704 00:30:37.010 17:08:38 
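Two fuzz passes run against the same connect string: a time-bounded randomized pass with a fixed seed (-t 30 -S 123456), which keeps any failure reproducible, followed by a replay of the canned commands in example.json. Condensed below with the arguments exactly as traced; the FUZZ and TRID variable names are just shorthand for this sketch.

  FUZZ=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  "$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
  "$FUZZ" -m 0x2 -F "$TRID" \
      -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a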
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79704' 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 79704 00:30:37.010 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 79704 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:30:38.930 00:30:38.930 real 0m5.624s 00:30:38.930 user 0m6.804s 00:30:38.930 sys 0m0.947s 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:38.930 ************************************ 00:30:38.930 END TEST nvmf_fuzz 00:30:38.930 ************************************ 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:38.930 ************************************ 00:30:38.930 START TEST nvmf_multiconnection 00:30:38.930 ************************************ 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:30:38.930 * Looking for test storage... 
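The fuzz phase traced above runs the nvme_fuzz app twice against that listener, both runs ending in "Shutting down the fuzz application" before the subsystem is deleted and the target torn down. Condensed, the two invocations (trid string, paths and flags copied from the trace; -m 0x2 is the usual SPDK core mask) are roughly:

    fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # first run: 30 s (-t 30) with a fixed -S 123456 value against the trid
    $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    # second run: driven by the example.json command set against the same controller
    $fuzz -m 0x2 -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a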
00:30:38.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:38.930 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.931 17:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:38.931 Cannot find device "nvmf_tgt_br" 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:38.931 Cannot find device "nvmf_tgt_br2" 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:38.931 Cannot find device "nvmf_tgt_br" 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:38.931 Cannot find device "nvmf_tgt_br2" 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:38.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:38.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
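With NET_TYPE=virt, nvmf_veth_init builds the test network in software: the initiator end (nvmf_init_if, 10.0.0.1/24) stays in the root namespace, the target ends (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the three *_br peer interfaces are enslaved to the nvmf_br bridge, with iptables admitting TCP/4420; the addressing, bridging and verification pings continue in the trace below. Condensed sketch (names, addresses and rules as traced; the "Cannot find device" output from the best-effort cleanup above is omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT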
00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:38.931 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:39.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:30:39.189 00:30:39.189 --- 10.0.0.2 ping statistics --- 00:30:39.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.189 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:39.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:39.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:30:39.189 00:30:39.189 --- 10.0.0.3 ping statistics --- 00:30:39.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.189 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:39.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:39.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:30:39.189 00:30:39.189 --- 10.0.0.1 ping statistics --- 00:30:39.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.189 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=79963 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 79963 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 79963 ']' 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:39.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
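The target process is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the harness waits for it to listen on /var/tmp/spdk.sock before creating the transport and the eleven subsystems. The remainder of the test, traced over the following lines, reduces to the sketch below; the socket and serial polling are simplified stand-ins for the harness's waitforlisten/waitforserial helpers, the rpc wrapper is an assumed expansion of rpc_cmd, and the hostnqn/hostid values are the ones generated above:

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done            # simplified waitforlisten
    rpc() { $spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }     # assumed rpc_cmd expansion
    rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do                                       # NVMF_SUBSYS=11
        rpc bdev_malloc_create 64 512 -b Malloc$i
        rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    for i in $(seq 1 11); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
                     --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 \
                     -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # simplified waitforserial: block until a block device with serial SPDK$i shows up
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do sleep 2; done
    done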
00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:39.189 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:39.189 [2024-07-22 17:08:40.764181] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:39.189 [2024-07-22 17:08:40.764328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.448 [2024-07-22 17:08:40.935086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:39.706 [2024-07-22 17:08:41.267041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.706 [2024-07-22 17:08:41.267115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.706 [2024-07-22 17:08:41.267135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.706 [2024-07-22 17:08:41.267156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.706 [2024-07-22 17:08:41.267176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.706 [2024-07-22 17:08:41.267456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.706 [2024-07-22 17:08:41.267626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.706 [2024-07-22 17:08:41.268410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.706 [2024-07-22 17:08:41.268420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.963 [2024-07-22 17:08:41.554766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.222 [2024-07-22 17:08:41.790187] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:30:40.222 17:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.222 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.480 Malloc1 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.480 [2024-07-22 17:08:41.944623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.480 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:40.481 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:30:40.481 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.481 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.481 Malloc2 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.481 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.739 Malloc3 00:30:40.739 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.739 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:30:40.739 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.739 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.739 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.740 Malloc4 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.740 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 Malloc5 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:30:40.998 
17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 Malloc6 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.998 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 Malloc7 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 Malloc8 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 
17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.257 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 Malloc9 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 Malloc10 00:30:41.515 17:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.515 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.773 Malloc11 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:30:41.773 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:41.774 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:41.774 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:30:41.774 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:41.774 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:41.774 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:41.774 17:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:44.303 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:46.240 17:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:46.240 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:48.142 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:30:48.400 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:30:48.400 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:48.400 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:48.400 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:30:48.400 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:50.298 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:30:50.556 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:30:50.556 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:50.556 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:50.556 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:50.556 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:52.455 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:30:52.713 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:30:52.714 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:52.714 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:30:52.714 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:52.714 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:54.622 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:30:54.880 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:30:54.880 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:54.880 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:54.880 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:54.880 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:56.780 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:30:57.038 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:30:57.038 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:30:57.038 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:57.038 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:57.038 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:58.954 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:30:59.212 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:30:59.212 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:30:59.212 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:59.212 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:59.212 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:01.117 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:31:01.375 17:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:31:01.376 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:01.376 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:01.376 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:01.376 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:03.276 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:03.276 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:03.276 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:31:03.563 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:03.563 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:03.563 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:03.563 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:03.563 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:31:03.563 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:31:03.563 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:03.563 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:03.563 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:03.563 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:05.479 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:05.479 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:05.479 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:31:05.738 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:05.738 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:05.738 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:05.738 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:31:05.738 [global] 00:31:05.738 thread=1 00:31:05.738 invalidate=1 00:31:05.738 rw=read 00:31:05.738 time_based=1 
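(For reference: the fio-wrapper call above, -p nvmf -i 262144 -d 64 -t read -r 10, expands into the job file being printed here; the [global] options shown above continue below, followed by one [jobN] stanza per connected namespace. The following is a minimal stand-alone sketch of an equivalent job, assuming plain fio is available on the host and the same /dev/nvmeXn1 device names that appear in the job listing below; the flag-to-option mapping is inferred from the printed job file, not taken from the wrapper source.)

# sketch only: approximates scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
cat > nvmf-read.fio <<'EOF'
[global]
; bs, iodepth, rw and runtime mirror the wrapper flags -i 262144 -d 64 -t read -r 10
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

; add one [jobN] section per connected namespace (the 11 devices listed below)
[job0]
filename=/dev/nvme0n1
EOF
fio nvmf-read.fio

(The generated job file continues in the log output below.)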
00:31:05.738 runtime=10 00:31:05.738 ioengine=libaio 00:31:05.738 direct=1 00:31:05.738 bs=262144 00:31:05.738 iodepth=64 00:31:05.738 norandommap=1 00:31:05.738 numjobs=1 00:31:05.738 00:31:05.738 [job0] 00:31:05.738 filename=/dev/nvme0n1 00:31:05.738 [job1] 00:31:05.738 filename=/dev/nvme10n1 00:31:05.738 [job2] 00:31:05.738 filename=/dev/nvme1n1 00:31:05.738 [job3] 00:31:05.738 filename=/dev/nvme2n1 00:31:05.738 [job4] 00:31:05.738 filename=/dev/nvme3n1 00:31:05.738 [job5] 00:31:05.738 filename=/dev/nvme4n1 00:31:05.738 [job6] 00:31:05.738 filename=/dev/nvme5n1 00:31:05.738 [job7] 00:31:05.738 filename=/dev/nvme6n1 00:31:05.738 [job8] 00:31:05.738 filename=/dev/nvme7n1 00:31:05.738 [job9] 00:31:05.738 filename=/dev/nvme8n1 00:31:05.738 [job10] 00:31:05.738 filename=/dev/nvme9n1 00:31:05.738 Could not set queue depth (nvme0n1) 00:31:05.738 Could not set queue depth (nvme10n1) 00:31:05.738 Could not set queue depth (nvme1n1) 00:31:05.738 Could not set queue depth (nvme2n1) 00:31:05.738 Could not set queue depth (nvme3n1) 00:31:05.738 Could not set queue depth (nvme4n1) 00:31:05.738 Could not set queue depth (nvme5n1) 00:31:05.738 Could not set queue depth (nvme6n1) 00:31:05.738 Could not set queue depth (nvme7n1) 00:31:05.738 Could not set queue depth (nvme8n1) 00:31:05.738 Could not set queue depth (nvme9n1) 00:31:05.997 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:05.997 fio-3.35 00:31:05.997 Starting 11 threads 00:31:18.241 00:31:18.241 job0: (groupid=0, jobs=1): err= 0: pid=80425: Mon Jul 22 17:09:17 2024 00:31:18.241 read: IOPS=681, BW=170MiB/s (179MB/s)(1719MiB/10088msec) 00:31:18.241 slat (usec): min=18, max=25494, avg=1451.90, stdev=3149.89 00:31:18.241 clat (msec): min=11, max=181, avg=92.28, stdev=18.51 00:31:18.241 lat (msec): min=15, max=181, avg=93.73, stdev=18.74 00:31:18.241 clat percentiles (msec): 00:31:18.241 | 1.00th=[ 40], 5.00th=[ 65], 10.00th=[ 69], 20.00th=[ 75], 00:31:18.241 | 30.00th=[ 81], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 101], 00:31:18.241 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 116], 00:31:18.241 | 99.00th=[ 125], 99.50th=[ 134], 99.90th=[ 178], 99.95th=[ 182], 00:31:18.241 | 99.99th=[ 182] 00:31:18.241 bw ( KiB/s): min=144253, max=223232, per=9.22%, 
avg=174361.55, stdev=29694.96, samples=20 00:31:18.241 iops : min= 563, max= 872, avg=681.00, stdev=116.00, samples=20 00:31:18.241 lat (msec) : 20=0.20%, 50=1.05%, 100=57.95%, 250=40.80% 00:31:18.241 cpu : usr=0.37%, sys=2.58%, ctx=1568, majf=0, minf=4097 00:31:18.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:18.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.241 issued rwts: total=6877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.241 job1: (groupid=0, jobs=1): err= 0: pid=80426: Mon Jul 22 17:09:17 2024 00:31:18.241 read: IOPS=971, BW=243MiB/s (255MB/s)(2433MiB/10015msec) 00:31:18.241 slat (usec): min=23, max=25521, avg=1023.20, stdev=2294.50 00:31:18.241 clat (usec): min=13059, max=91475, avg=64708.72, stdev=6122.12 00:31:18.241 lat (usec): min=16150, max=91526, avg=65731.92, stdev=6099.59 00:31:18.241 clat percentiles (usec): 00:31:18.241 | 1.00th=[46400], 5.00th=[56361], 10.00th=[58983], 20.00th=[61080], 00:31:18.241 | 30.00th=[62129], 40.00th=[63701], 50.00th=[64750], 60.00th=[65799], 00:31:18.241 | 70.00th=[66847], 80.00th=[68682], 90.00th=[71828], 95.00th=[73925], 00:31:18.241 | 99.00th=[80217], 99.50th=[84411], 99.90th=[90702], 99.95th=[90702], 00:31:18.241 | 99.99th=[91751] 00:31:18.241 bw ( KiB/s): min=213931, max=260096, per=13.09%, avg=247497.90, stdev=10855.36, samples=20 00:31:18.241 iops : min= 835, max= 1016, avg=966.75, stdev=42.51, samples=20 00:31:18.241 lat (msec) : 20=0.05%, 50=1.24%, 100=98.71% 00:31:18.241 cpu : usr=0.38%, sys=3.61%, ctx=2051, majf=0, minf=4097 00:31:18.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:18.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.241 issued rwts: total=9733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.241 job2: (groupid=0, jobs=1): err= 0: pid=80427: Mon Jul 22 17:09:17 2024 00:31:18.241 read: IOPS=557, BW=139MiB/s (146MB/s)(1406MiB/10090msec) 00:31:18.241 slat (usec): min=18, max=93311, avg=1751.47, stdev=4147.19 00:31:18.241 clat (msec): min=34, max=182, avg=112.89, stdev=18.03 00:31:18.241 lat (msec): min=34, max=229, avg=114.64, stdev=18.42 00:31:18.241 clat percentiles (msec): 00:31:18.241 | 1.00th=[ 58], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 100], 00:31:18.241 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 113], 00:31:18.241 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 138], 95.00th=[ 144], 00:31:18.241 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 182], 00:31:18.241 | 99.99th=[ 184] 00:31:18.241 bw ( KiB/s): min=115200, max=162304, per=7.52%, avg=142314.65, stdev=17420.43, samples=20 00:31:18.241 iops : min= 450, max= 634, avg=555.80, stdev=67.95, samples=20 00:31:18.241 lat (msec) : 50=0.20%, 100=21.73%, 250=78.08% 00:31:18.242 cpu : usr=0.25%, sys=2.15%, ctx=1392, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=5624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:31:18.242 job3: (groupid=0, jobs=1): err= 0: pid=80428: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=615, BW=154MiB/s (161MB/s)(1551MiB/10073msec) 00:31:18.242 slat (usec): min=17, max=68808, avg=1606.72, stdev=3426.86 00:31:18.242 clat (msec): min=69, max=197, avg=102.25, stdev= 8.72 00:31:18.242 lat (msec): min=79, max=197, avg=103.85, stdev= 8.74 00:31:18.242 clat percentiles (msec): 00:31:18.242 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:31:18.242 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 104], 00:31:18.242 | 70.00th=[ 106], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 115], 00:31:18.242 | 99.00th=[ 122], 99.50th=[ 140], 99.90th=[ 186], 99.95th=[ 194], 00:31:18.242 | 99.99th=[ 199] 00:31:18.242 bw ( KiB/s): min=143360, max=165376, per=8.31%, avg=157184.00, stdev=5790.24, samples=20 00:31:18.242 iops : min= 560, max= 646, avg=614.00, stdev=22.62, samples=20 00:31:18.242 lat (msec) : 100=40.72%, 250=59.28% 00:31:18.242 cpu : usr=0.34%, sys=2.68%, ctx=1560, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=6203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.242 job4: (groupid=0, jobs=1): err= 0: pid=80429: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=569, BW=142MiB/s (149MB/s)(1434MiB/10078msec) 00:31:18.242 slat (usec): min=16, max=52563, avg=1743.01, stdev=3958.47 00:31:18.242 clat (msec): min=53, max=181, avg=110.51, stdev=17.54 00:31:18.242 lat (msec): min=62, max=181, avg=112.25, stdev=17.83 00:31:18.242 clat percentiles (msec): 00:31:18.242 | 1.00th=[ 83], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 97], 00:31:18.242 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 109], 00:31:18.242 | 70.00th=[ 117], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 140], 00:31:18.242 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 182], 00:31:18.242 | 99.99th=[ 182] 00:31:18.242 bw ( KiB/s): min=111616, max=166400, per=7.67%, avg=145087.65, stdev=20512.72, samples=20 00:31:18.242 iops : min= 436, max= 650, avg=566.70, stdev=80.10, samples=20 00:31:18.242 lat (msec) : 100=33.64%, 250=66.36% 00:31:18.242 cpu : usr=0.19%, sys=2.34%, ctx=1317, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=5735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.242 job5: (groupid=0, jobs=1): err= 0: pid=80430: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=971, BW=243MiB/s (255MB/s)(2431MiB/10013msec) 00:31:18.242 slat (usec): min=15, max=52602, avg=1020.14, stdev=2369.13 00:31:18.242 clat (msec): min=10, max=105, avg=64.80, stdev= 6.18 00:31:18.242 lat (msec): min=11, max=105, avg=65.82, stdev= 6.14 00:31:18.242 clat percentiles (msec): 00:31:18.242 | 1.00th=[ 50], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 62], 00:31:18.242 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:31:18.242 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 74], 00:31:18.242 | 99.00th=[ 
81], 99.50th=[ 84], 99.90th=[ 100], 99.95th=[ 100], 00:31:18.242 | 99.99th=[ 106] 00:31:18.242 bw ( KiB/s): min=207360, max=263168, per=13.10%, avg=247863.05, stdev=11959.85, samples=19 00:31:18.242 iops : min= 810, max= 1028, avg=968.16, stdev=46.68, samples=19 00:31:18.242 lat (msec) : 20=0.23%, 50=0.87%, 100=98.86%, 250=0.04% 00:31:18.242 cpu : usr=0.39%, sys=3.33%, ctx=2048, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=9725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.242 job6: (groupid=0, jobs=1): err= 0: pid=80431: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=616, BW=154MiB/s (162MB/s)(1556MiB/10094msec) 00:31:18.242 slat (usec): min=15, max=58480, avg=1603.75, stdev=3507.30 00:31:18.242 clat (msec): min=12, max=195, avg=102.05, stdev=10.87 00:31:18.242 lat (msec): min=13, max=200, avg=103.66, stdev=10.97 00:31:18.242 clat percentiles (msec): 00:31:18.242 | 1.00th=[ 50], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 96], 00:31:18.242 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 105], 00:31:18.242 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 116], 00:31:18.242 | 99.00th=[ 125], 99.50th=[ 140], 99.90th=[ 180], 99.95th=[ 192], 00:31:18.242 | 99.99th=[ 197] 00:31:18.242 bw ( KiB/s): min=152576, max=164864, per=8.33%, avg=157633.80, stdev=3583.10, samples=20 00:31:18.242 iops : min= 596, max= 644, avg=615.60, stdev=14.11, samples=20 00:31:18.242 lat (msec) : 20=0.11%, 50=0.98%, 100=37.47%, 250=61.43% 00:31:18.242 cpu : usr=0.22%, sys=2.28%, ctx=1525, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=6223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.242 job7: (groupid=0, jobs=1): err= 0: pid=80432: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=615, BW=154MiB/s (161MB/s)(1553MiB/10089msec) 00:31:18.242 slat (usec): min=18, max=32297, avg=1605.44, stdev=3432.18 00:31:18.242 clat (msec): min=19, max=193, avg=102.21, stdev= 8.85 00:31:18.242 lat (msec): min=20, max=193, avg=103.81, stdev= 8.91 00:31:18.242 clat percentiles (msec): 00:31:18.242 | 1.00th=[ 85], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 96], 00:31:18.242 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:31:18.242 | 70.00th=[ 106], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:31:18.242 | 99.00th=[ 122], 99.50th=[ 136], 99.90th=[ 178], 99.95th=[ 190], 00:31:18.242 | 99.99th=[ 194] 00:31:18.242 bw ( KiB/s): min=147238, max=167424, per=8.32%, avg=157336.55, stdev=5333.97, samples=20 00:31:18.242 iops : min= 575, max= 654, avg=614.50, stdev=20.86, samples=20 00:31:18.242 lat (msec) : 20=0.02%, 50=0.13%, 100=38.67%, 250=61.19% 00:31:18.242 cpu : usr=0.26%, sys=2.69%, ctx=1449, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=6212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.242 job8: (groupid=0, jobs=1): err= 0: pid=80433: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=680, BW=170MiB/s (178MB/s)(1715MiB/10076msec) 00:31:18.242 slat (usec): min=16, max=25447, avg=1445.67, stdev=3098.15 00:31:18.242 clat (msec): min=11, max=174, avg=92.43, stdev=16.94 00:31:18.242 lat (msec): min=11, max=174, avg=93.88, stdev=17.18 00:31:18.242 clat percentiles (msec): 00:31:18.242 | 1.00th=[ 57], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 75], 00:31:18.242 | 30.00th=[ 81], 40.00th=[ 90], 50.00th=[ 97], 60.00th=[ 101], 00:31:18.242 | 70.00th=[ 104], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 115], 00:31:18.242 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 167], 99.95th=[ 176], 00:31:18.242 | 99.99th=[ 176] 00:31:18.242 bw ( KiB/s): min=145920, max=226304, per=9.20%, avg=173952.00, stdev=27463.18, samples=20 00:31:18.242 iops : min= 570, max= 884, avg=679.50, stdev=107.28, samples=20 00:31:18.242 lat (msec) : 20=0.15%, 50=0.32%, 100=58.75%, 250=40.78% 00:31:18.242 cpu : usr=0.27%, sys=2.32%, ctx=1626, majf=0, minf=4097 00:31:18.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:18.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.242 issued rwts: total=6859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.242 job9: (groupid=0, jobs=1): err= 0: pid=80434: Mon Jul 22 17:09:17 2024 00:31:18.242 read: IOPS=564, BW=141MiB/s (148MB/s)(1422MiB/10075msec) 00:31:18.243 slat (usec): min=23, max=77715, avg=1754.96, stdev=4228.53 00:31:18.243 clat (msec): min=34, max=176, avg=111.41, stdev=17.83 00:31:18.243 lat (msec): min=35, max=216, avg=113.16, stdev=18.17 00:31:18.243 clat percentiles (msec): 00:31:18.243 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 96], 00:31:18.243 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 111], 00:31:18.243 | 70.00th=[ 124], 80.00th=[ 132], 90.00th=[ 138], 95.00th=[ 142], 00:31:18.243 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 176], 99.95th=[ 176], 00:31:18.243 | 99.99th=[ 178] 00:31:18.243 bw ( KiB/s): min=100040, max=169472, per=7.61%, avg=143968.05, stdev=20915.23, samples=20 00:31:18.243 iops : min= 390, max= 662, avg=562.25, stdev=81.81, samples=20 00:31:18.243 lat (msec) : 50=0.11%, 100=31.55%, 250=68.34% 00:31:18.243 cpu : usr=0.27%, sys=2.12%, ctx=1385, majf=0, minf=4097 00:31:18.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:18.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.243 issued rwts: total=5689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.243 job10: (groupid=0, jobs=1): err= 0: pid=80435: Mon Jul 22 17:09:17 2024 00:31:18.243 read: IOPS=565, BW=141MiB/s (148MB/s)(1425MiB/10079msec) 00:31:18.243 slat (usec): min=19, max=45500, avg=1748.92, stdev=3853.81 00:31:18.243 clat (msec): min=47, max=197, avg=111.30, stdev=17.77 00:31:18.243 lat (msec): min=47, max=201, avg=113.05, stdev=18.05 00:31:18.243 clat percentiles (msec): 00:31:18.243 | 1.00th=[ 83], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 97], 00:31:18.243 | 
30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 107], 60.00th=[ 111], 00:31:18.243 | 70.00th=[ 123], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 140], 00:31:18.243 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 192], 99.95th=[ 192], 00:31:18.243 | 99.99th=[ 197] 00:31:18.243 bw ( KiB/s): min=112640, max=172032, per=7.62%, avg=144219.20, stdev=19159.62, samples=20 00:31:18.243 iops : min= 440, max= 672, avg=563.20, stdev=74.74, samples=20 00:31:18.243 lat (msec) : 50=0.09%, 100=30.85%, 250=69.06% 00:31:18.243 cpu : usr=0.36%, sys=2.80%, ctx=1427, majf=0, minf=4097 00:31:18.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:18.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:18.243 issued rwts: total=5699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:18.243 00:31:18.243 Run status group 0 (all jobs): 00:31:18.243 READ: bw=1847MiB/s (1937MB/s), 139MiB/s-243MiB/s (146MB/s-255MB/s), io=18.2GiB (19.6GB), run=10013-10094msec 00:31:18.243 00:31:18.243 Disk stats (read/write): 00:31:18.243 nvme0n1: ios=13577/0, merge=0/0, ticks=1226340/0, in_queue=1226340, util=97.31% 00:31:18.243 nvme10n1: ios=19259/0, merge=0/0, ticks=1231815/0, in_queue=1231815, util=97.51% 00:31:18.243 nvme1n1: ios=11075/0, merge=0/0, ticks=1226090/0, in_queue=1226090, util=97.80% 00:31:18.243 nvme2n1: ios=12237/0, merge=0/0, ticks=1225375/0, in_queue=1225375, util=97.77% 00:31:18.243 nvme3n1: ios=11301/0, merge=0/0, ticks=1224655/0, in_queue=1224655, util=97.90% 00:31:18.243 nvme4n1: ios=19270/0, merge=0/0, ticks=1233047/0, in_queue=1233047, util=98.30% 00:31:18.243 nvme5n1: ios=12270/0, merge=0/0, ticks=1225800/0, in_queue=1225800, util=98.45% 00:31:18.243 nvme6n1: ios=12249/0, merge=0/0, ticks=1226065/0, in_queue=1226065, util=98.47% 00:31:18.243 nvme7n1: ios=13529/0, merge=0/0, ticks=1226172/0, in_queue=1226172, util=98.68% 00:31:18.243 nvme8n1: ios=11206/0, merge=0/0, ticks=1225997/0, in_queue=1225997, util=98.88% 00:31:18.243 nvme9n1: ios=11235/0, merge=0/0, ticks=1225142/0, in_queue=1225142, util=99.09% 00:31:18.243 17:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:31:18.243 [global] 00:31:18.243 thread=1 00:31:18.243 invalidate=1 00:31:18.243 rw=randwrite 00:31:18.243 time_based=1 00:31:18.243 runtime=10 00:31:18.243 ioengine=libaio 00:31:18.243 direct=1 00:31:18.243 bs=262144 00:31:18.243 iodepth=64 00:31:18.243 norandommap=1 00:31:18.243 numjobs=1 00:31:18.243 00:31:18.243 [job0] 00:31:18.243 filename=/dev/nvme0n1 00:31:18.243 [job1] 00:31:18.243 filename=/dev/nvme10n1 00:31:18.243 [job2] 00:31:18.243 filename=/dev/nvme1n1 00:31:18.243 [job3] 00:31:18.243 filename=/dev/nvme2n1 00:31:18.243 [job4] 00:31:18.243 filename=/dev/nvme3n1 00:31:18.243 [job5] 00:31:18.243 filename=/dev/nvme4n1 00:31:18.243 [job6] 00:31:18.243 filename=/dev/nvme5n1 00:31:18.243 [job7] 00:31:18.243 filename=/dev/nvme6n1 00:31:18.243 [job8] 00:31:18.243 filename=/dev/nvme7n1 00:31:18.243 [job9] 00:31:18.243 filename=/dev/nvme8n1 00:31:18.243 [job10] 00:31:18.243 filename=/dev/nvme9n1 00:31:18.243 Could not set queue depth (nvme0n1) 00:31:18.243 Could not set queue depth (nvme10n1) 00:31:18.243 Could not set queue depth (nvme1n1) 00:31:18.243 Could not set queue depth (nvme2n1) 00:31:18.243 Could not set 
queue depth (nvme3n1) 00:31:18.243 Could not set queue depth (nvme4n1) 00:31:18.243 Could not set queue depth (nvme5n1) 00:31:18.243 Could not set queue depth (nvme6n1) 00:31:18.243 Could not set queue depth (nvme7n1) 00:31:18.243 Could not set queue depth (nvme8n1) 00:31:18.243 Could not set queue depth (nvme9n1) 00:31:18.243 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:18.243 fio-3.35 00:31:18.243 Starting 11 threads 00:31:28.233 00:31:28.233 job0: (groupid=0, jobs=1): err= 0: pid=80636: Mon Jul 22 17:09:28 2024 00:31:28.233 write: IOPS=217, BW=54.3MiB/s (56.9MB/s)(557MiB/10267msec); 0 zone resets 00:31:28.233 slat (usec): min=23, max=100014, avg=4479.92, stdev=8412.00 00:31:28.233 clat (msec): min=41, max=568, avg=290.14, stdev=47.14 00:31:28.233 lat (msec): min=41, max=568, avg=294.62, stdev=47.05 00:31:28.233 clat percentiles (msec): 00:31:28.233 | 1.00th=[ 93], 5.00th=[ 255], 10.00th=[ 268], 20.00th=[ 275], 00:31:28.233 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 292], 00:31:28.233 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 359], 00:31:28.233 | 99.00th=[ 451], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 567], 00:31:28.233 | 99.99th=[ 567] 00:31:28.233 bw ( KiB/s): min=43008, max=59904, per=5.03%, avg=55444.65, stdev=3811.59, samples=20 00:31:28.233 iops : min= 168, max= 234, avg=216.55, stdev=14.94, samples=20 00:31:28.233 lat (msec) : 50=0.18%, 100=0.90%, 250=2.33%, 500=95.96%, 750=0.63% 00:31:28.233 cpu : usr=0.62%, sys=0.73%, ctx=2393, majf=0, minf=1 00:31:28.233 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:31:28.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.233 issued rwts: total=0,2229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.233 job1: (groupid=0, jobs=1): err= 0: pid=80637: Mon Jul 22 17:09:28 2024 00:31:28.233 write: IOPS=629, BW=157MiB/s (165MB/s)(1589MiB/10093msec); 0 zone resets 00:31:28.233 slat (usec): min=20, max=19867, avg=1536.66, stdev=2672.88 00:31:28.233 clat (msec): min=12, 
max=196, avg=100.06, stdev=12.89 00:31:28.233 lat (msec): min=12, max=196, avg=101.60, stdev=12.88 00:31:28.233 clat percentiles (msec): 00:31:28.233 | 1.00th=[ 35], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:31:28.233 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 102], 00:31:28.233 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 107], 95.00th=[ 109], 00:31:28.233 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 184], 99.95th=[ 190], 00:31:28.233 | 99.99th=[ 197] 00:31:28.233 bw ( KiB/s): min=151552, max=180736, per=14.61%, avg=161116.15, stdev=6052.74, samples=20 00:31:28.233 iops : min= 592, max= 706, avg=629.35, stdev=23.66, samples=20 00:31:28.233 lat (msec) : 20=0.27%, 50=1.34%, 100=38.86%, 250=59.53% 00:31:28.233 cpu : usr=1.43%, sys=1.30%, ctx=8477, majf=0, minf=1 00:31:28.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:28.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.233 issued rwts: total=0,6356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.233 job2: (groupid=0, jobs=1): err= 0: pid=80649: Mon Jul 22 17:09:28 2024 00:31:28.233 write: IOPS=214, BW=53.6MiB/s (56.3MB/s)(551MiB/10266msec); 0 zone resets 00:31:28.233 slat (usec): min=22, max=103346, avg=4534.71, stdev=8566.76 00:31:28.233 clat (msec): min=110, max=563, avg=293.52, stdev=40.74 00:31:28.233 lat (msec): min=110, max=563, avg=298.06, stdev=40.37 00:31:28.233 clat percentiles (msec): 00:31:28.233 | 1.00th=[ 169], 5.00th=[ 257], 10.00th=[ 266], 20.00th=[ 275], 00:31:28.233 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 292], 00:31:28.233 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 330], 95.00th=[ 372], 00:31:28.233 | 99.00th=[ 447], 99.50th=[ 506], 99.90th=[ 542], 99.95th=[ 567], 00:31:28.233 | 99.99th=[ 567] 00:31:28.233 bw ( KiB/s): min=40960, max=59392, per=4.97%, avg=54776.55, stdev=5007.92, samples=20 00:31:28.233 iops : min= 160, max= 232, avg=213.90, stdev=19.57, samples=20 00:31:28.233 lat (msec) : 250=3.00%, 500=96.37%, 750=0.64% 00:31:28.233 cpu : usr=0.59%, sys=0.69%, ctx=1526, majf=0, minf=1 00:31:28.233 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:31:28.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.233 issued rwts: total=0,2203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.233 job3: (groupid=0, jobs=1): err= 0: pid=80650: Mon Jul 22 17:09:28 2024 00:31:28.233 write: IOPS=220, BW=55.2MiB/s (57.9MB/s)(567MiB/10271msec); 0 zone resets 00:31:28.233 slat (usec): min=20, max=76137, avg=4405.94, stdev=8030.23 00:31:28.234 clat (msec): min=42, max=568, avg=285.25, stdev=43.42 00:31:28.234 lat (msec): min=42, max=568, avg=289.65, stdev=43.29 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 94], 5.00th=[ 255], 10.00th=[ 264], 20.00th=[ 271], 00:31:28.234 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:31:28.234 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 342], 00:31:28.234 | 99.00th=[ 451], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 567], 00:31:28.234 | 99.99th=[ 567] 00:31:28.234 bw ( KiB/s): min=47104, max=59392, per=5.12%, avg=56430.25, stdev=3018.16, samples=20 00:31:28.234 iops : min= 184, max= 232, 
avg=220.40, stdev=11.76, samples=20 00:31:28.234 lat (msec) : 50=0.18%, 100=0.88%, 250=3.00%, 500=95.33%, 750=0.62% 00:31:28.234 cpu : usr=0.64%, sys=0.52%, ctx=2111, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,2268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.234 job4: (groupid=0, jobs=1): err= 0: pid=80651: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=226, BW=56.5MiB/s (59.3MB/s)(580MiB/10268msec); 0 zone resets 00:31:28.234 slat (usec): min=21, max=186453, avg=4128.79, stdev=8279.11 00:31:28.234 clat (msec): min=126, max=563, avg=278.85, stdev=36.70 00:31:28.234 lat (msec): min=126, max=563, avg=282.98, stdev=36.32 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 197], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 262], 00:31:28.234 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:31:28.234 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:31:28.234 | 99.00th=[ 447], 99.50th=[ 506], 99.90th=[ 542], 99.95th=[ 567], 00:31:28.234 | 99.99th=[ 567] 00:31:28.234 bw ( KiB/s): min=39503, max=63488, per=5.24%, avg=57797.05, stdev=5122.44, samples=20 00:31:28.234 iops : min= 154, max= 248, avg=225.70, stdev=20.06, samples=20 00:31:28.234 lat (msec) : 250=7.24%, 500=92.16%, 750=0.60% 00:31:28.234 cpu : usr=0.54%, sys=0.53%, ctx=2618, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,2321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.234 job5: (groupid=0, jobs=1): err= 0: pid=80652: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=597, BW=149MiB/s (157MB/s)(1505MiB/10074msec); 0 zone resets 00:31:28.234 slat (usec): min=17, max=90902, avg=1612.77, stdev=4537.89 00:31:28.234 clat (usec): min=1713, max=435786, avg=105442.54, stdev=95029.72 00:31:28.234 lat (msec): min=2, max=435, avg=107.06, stdev=96.43 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 13], 5.00th=[ 51], 10.00th=[ 58], 20.00th=[ 59], 00:31:28.234 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 62], 60.00th=[ 63], 00:31:28.234 | 70.00th=[ 65], 80.00th=[ 99], 90.00th=[ 292], 95.00th=[ 309], 00:31:28.234 | 99.00th=[ 372], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 430], 00:31:28.234 | 99.99th=[ 435] 00:31:28.234 bw ( KiB/s): min=40960, max=277504, per=13.82%, avg=152492.90, stdev=104251.22, samples=20 00:31:28.234 iops : min= 160, max= 1084, avg=595.60, stdev=407.20, samples=20 00:31:28.234 lat (msec) : 2=0.02%, 4=0.03%, 10=0.63%, 20=1.26%, 50=3.06% 00:31:28.234 lat (msec) : 100=75.05%, 250=1.96%, 500=17.99% 00:31:28.234 cpu : usr=1.08%, sys=1.30%, ctx=7856, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,6021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:31:28.234 job6: (groupid=0, jobs=1): err= 0: pid=80653: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=205, BW=51.5MiB/s (54.0MB/s)(529MiB/10275msec); 0 zone resets 00:31:28.234 slat (usec): min=21, max=141583, avg=4724.56, stdev=9488.46 00:31:28.234 clat (msec): min=144, max=565, avg=305.88, stdev=41.46 00:31:28.234 lat (msec): min=144, max=565, avg=310.61, stdev=40.85 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 215], 5.00th=[ 264], 10.00th=[ 271], 20.00th=[ 284], 00:31:28.234 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 305], 00:31:28.234 | 70.00th=[ 309], 80.00th=[ 317], 90.00th=[ 351], 95.00th=[ 397], 00:31:28.234 | 99.00th=[ 468], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 567], 00:31:28.234 | 99.99th=[ 567] 00:31:28.234 bw ( KiB/s): min=34816, max=57458, per=4.76%, avg=52536.90, stdev=5958.21, samples=20 00:31:28.234 iops : min= 136, max= 224, avg=205.20, stdev=23.26, samples=20 00:31:28.234 lat (msec) : 250=2.46%, 500=96.88%, 750=0.66% 00:31:28.234 cpu : usr=0.44%, sys=0.82%, ctx=2725, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,2116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.234 job7: (groupid=0, jobs=1): err= 0: pid=80654: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=618, BW=155MiB/s (162MB/s)(1560MiB/10094msec); 0 zone resets 00:31:28.234 slat (usec): min=18, max=18407, avg=1599.22, stdev=2724.14 00:31:28.234 clat (msec): min=20, max=191, avg=101.92, stdev=10.01 00:31:28.234 lat (msec): min=20, max=191, avg=103.52, stdev= 9.78 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 92], 5.00th=[ 94], 10.00th=[ 95], 20.00th=[ 97], 00:31:28.234 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 102], 00:31:28.234 | 70.00th=[ 104], 80.00th=[ 105], 90.00th=[ 107], 95.00th=[ 114], 00:31:28.234 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 186], 00:31:28.234 | 99.99th=[ 192] 00:31:28.234 bw ( KiB/s): min=122880, max=165888, per=14.33%, avg=158073.55, stdev=9400.33, samples=20 00:31:28.234 iops : min= 480, max= 648, avg=617.40, stdev=36.71, samples=20 00:31:28.234 lat (msec) : 50=0.32%, 100=38.08%, 250=61.60% 00:31:28.234 cpu : usr=1.18%, sys=1.44%, ctx=8855, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,6239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.234 job8: (groupid=0, jobs=1): err= 0: pid=80655: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=418, BW=105MiB/s (110MB/s)(1074MiB/10274msec); 0 zone resets 00:31:28.234 slat (usec): min=21, max=75648, avg=2295.96, stdev=5027.36 00:31:28.234 clat (msec): min=5, max=561, avg=150.64, stdev=90.22 00:31:28.234 lat (msec): min=5, max=561, avg=152.94, stdev=91.39 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 55], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 100], 00:31:28.234 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 105], 00:31:28.234 | 70.00th=[ 
110], 80.00th=[ 284], 90.00th=[ 309], 95.00th=[ 326], 00:31:28.234 | 99.00th=[ 359], 99.50th=[ 464], 99.90th=[ 542], 99.95th=[ 542], 00:31:28.234 | 99.99th=[ 558] 00:31:28.234 bw ( KiB/s): min=49152, max=163328, per=9.82%, avg=108349.40, stdev=52248.48, samples=20 00:31:28.234 iops : min= 192, max= 638, avg=423.20, stdev=204.06, samples=20 00:31:28.234 lat (msec) : 10=0.07%, 20=0.23%, 50=0.63%, 100=23.78%, 250=52.55% 00:31:28.234 lat (msec) : 500=22.41%, 750=0.33% 00:31:28.234 cpu : usr=0.96%, sys=0.93%, ctx=5239, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,4297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.234 job9: (groupid=0, jobs=1): err= 0: pid=80656: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=206, BW=51.6MiB/s (54.1MB/s)(530MiB/10269msec); 0 zone resets 00:31:28.234 slat (usec): min=22, max=144289, avg=4712.77, stdev=9490.19 00:31:28.234 clat (msec): min=41, max=568, avg=305.05, stdev=47.06 00:31:28.234 lat (msec): min=41, max=568, avg=309.76, stdev=46.65 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 93], 5.00th=[ 266], 10.00th=[ 275], 20.00th=[ 284], 00:31:28.234 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 305], 00:31:28.234 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 388], 00:31:28.234 | 99.00th=[ 472], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 567], 00:31:28.234 | 99.99th=[ 567] 00:31:28.234 bw ( KiB/s): min=36864, max=57344, per=4.77%, avg=52645.15, stdev=4909.17, samples=20 00:31:28.234 iops : min= 144, max= 224, avg=205.60, stdev=19.23, samples=20 00:31:28.234 lat (msec) : 50=0.19%, 100=0.94%, 250=0.94%, 500=97.26%, 750=0.66% 00:31:28.234 cpu : usr=0.57%, sys=0.63%, ctx=1240, majf=0, minf=1 00:31:28.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.234 issued rwts: total=0,2120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.234 job10: (groupid=0, jobs=1): err= 0: pid=80657: Mon Jul 22 17:09:28 2024 00:31:28.234 write: IOPS=805, BW=201MiB/s (211MB/s)(2027MiB/10061msec); 0 zone resets 00:31:28.234 slat (usec): min=17, max=8874, avg=1229.80, stdev=2153.05 00:31:28.234 clat (msec): min=4, max=121, avg=78.17, stdev=19.75 00:31:28.234 lat (msec): min=4, max=121, avg=79.40, stdev=19.97 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62], 00:31:28.234 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 85], 00:31:28.234 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 105], 95.00th=[ 107], 00:31:28.234 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 114], 99.95th=[ 117], 00:31:28.234 | 99.99th=[ 122] 00:31:28.234 bw ( KiB/s): min=152064, max=265216, per=18.66%, avg=205877.40, stdev=49158.61, samples=20 00:31:28.235 iops : min= 594, max= 1036, avg=804.20, stdev=192.02, samples=20 00:31:28.235 lat (msec) : 10=0.07%, 20=0.10%, 50=0.35%, 100=72.12%, 250=27.36% 00:31:28.235 cpu : usr=1.74%, sys=1.58%, ctx=10725, majf=0, minf=1 00:31:28.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:28.235 issued rwts: total=0,8106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:28.235 00:31:28.235 Run status group 0 (all jobs): 00:31:28.235 WRITE: bw=1077MiB/s (1130MB/s), 51.5MiB/s-201MiB/s (54.0MB/s-211MB/s), io=10.8GiB (11.6GB), run=10061-10275msec 00:31:28.235 00:31:28.235 Disk stats (read/write): 00:31:28.235 nvme0n1: ios=49/4408, merge=0/0, ticks=48/1225716, in_queue=1225764, util=97.49% 00:31:28.235 nvme10n1: ios=49/12491, merge=0/0, ticks=67/1208053, in_queue=1208120, util=97.60% 00:31:28.235 nvme1n1: ios=42/4353, merge=0/0, ticks=62/1225510, in_queue=1225572, util=97.75% 00:31:28.235 nvme2n1: ios=15/4484, merge=0/0, ticks=15/1226275, in_queue=1226290, util=97.73% 00:31:28.235 nvme3n1: ios=15/4587, merge=0/0, ticks=24/1228560, in_queue=1228584, util=97.80% 00:31:28.235 nvme4n1: ios=0/11773, merge=0/0, ticks=0/1209296, in_queue=1209296, util=97.99% 00:31:28.235 nvme5n1: ios=0/4173, merge=0/0, ticks=0/1224159, in_queue=1224159, util=98.11% 00:31:28.235 nvme6n1: ios=0/12248, merge=0/0, ticks=0/1206593, in_queue=1206593, util=98.22% 00:31:28.235 nvme7n1: ios=0/8535, merge=0/0, ticks=0/1225449, in_queue=1225449, util=98.56% 00:31:28.235 nvme8n1: ios=0/4190, merge=0/0, ticks=0/1225664, in_queue=1225664, util=98.80% 00:31:28.235 nvme9n1: ios=0/15944, merge=0/0, ticks=0/1209509, in_queue=1209509, util=98.96% 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:28.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.235 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.235 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:31:28.235 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:31:28.235 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.235 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:31:28.235 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:31:28.235 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.235 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:31:28.235 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.235 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:31:28.236 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.236 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:31:28.236 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.236 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:31:28.499 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.499 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.499 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:31:28.499 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:31:28.499 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:31:28.499 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:28.758 
17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.758 rmmod nvme_tcp 00:31:28.758 rmmod nvme_fabrics 00:31:28.758 rmmod nvme_keyring 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 79963 ']' 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 79963 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 79963 ']' 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 79963 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79963 00:31:28.758 killing process with pid 79963 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79963' 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 79963 00:31:28.758 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 79963 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:32.970 17:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:32.970 ************************************ 00:31:32.970 END TEST nvmf_multiconnection 00:31:32.970 ************************************ 00:31:32.970 00:31:32.970 real 0m54.088s 00:31:32.970 user 2m59.783s 00:31:32.970 sys 0m34.424s 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:32.970 ************************************ 00:31:32.970 START TEST nvmf_initiator_timeout 00:31:32.970 ************************************ 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:31:32.970 * Looking for test storage... 
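For readers skimming the trace, the per-subsystem teardown recorded above (target/multiconnection.sh@36-40, followed by @43-47) reduces to roughly the loop below. This is a sketch reconstructed only from the traced commands: NVMF_SUBSYS=11 is inferred from the eleven cnodeN disconnects in this run, and rpc_cmd, waitforserial_disconnect and nvmftestfini are the autotest_common.sh / nvmf/common.sh helpers visible in the trace.

  sync
  for i in $(seq 1 $NVMF_SUBSYS); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      waitforserial_disconnect "SPDK${i}"      # poll lsblk -l -o NAME,SERIAL until the serial disappears
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done
  rm -f ./local-job0-0-verify.state
  trap - SIGINT SIGTERM EXIT
  nvmftestfini                                 # modprobe -r nvme-tcp/nvme-fabrics, kill nvmf_tgt, flush nvmf_init_if

Note the ordering: the script waits for the SPDKn serial to vanish from lsblk, so the host-side disconnect is fully complete before the subsystem is deleted on the target side.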
00:31:32.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.970 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.971 17:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.971 17:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:32.971 Cannot find device "nvmf_tgt_br" 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:32.971 Cannot find device "nvmf_tgt_br2" 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:32.971 Cannot find device "nvmf_tgt_br" 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:32.971 Cannot find device "nvmf_tgt_br2" 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:32.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:32.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:32.971 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # 
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:33.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:31:33.230 00:31:33.230 --- 10.0.0.2 ping statistics --- 00:31:33.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.230 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:33.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:33.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:31:33.230 00:31:33.230 --- 10.0.0.3 ping statistics --- 00:31:33.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.230 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:31:33.230 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:33.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:33.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:31:33.231 00:31:33.231 --- 10.0.0.1 ping statistics --- 00:31:33.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.231 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:33.231 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=81077 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 81077 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 81077 ']' 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:33.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
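Condensed from the nvmf_veth_init trace above, the virtual test bed for this run looks like the sketch below: one host-side veth (nvmf_init_if, 10.0.0.1/24) and two target-side veths (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with TCP port 4420 opened for NVMe/TCP. Interface names and addresses are copied from the trace; the pre-cleanup and error handling that nvmf/common.sh performs are omitted.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # reachability check before nvmf_tgt is started

Running the target inside the namespace (NVMF_APP prefixed with NVMF_TARGET_NS_CMD, i.e. ip netns exec nvmf_tgt_ns_spdk) is what lets the host-side nvme connect exercise a real TCP path without any physical NICs.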
00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:33.489 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 [2024-07-22 17:09:34.983355] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:33.489 [2024-07-22 17:09:34.983522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.748 [2024-07-22 17:09:35.178129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:34.006 [2024-07-22 17:09:35.538063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.007 [2024-07-22 17:09:35.538134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.007 [2024-07-22 17:09:35.538151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.007 [2024-07-22 17:09:35.538166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.007 [2024-07-22 17:09:35.538180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.007 [2024-07-22 17:09:35.538394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.007 [2024-07-22 17:09:35.538573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:34.007 [2024-07-22 17:09:35.539127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.007 [2024-07-22 17:09:35.539131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.265 [2024-07-22 17:09:35.806331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:34.530 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:34.530 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:31:34.530 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:34.530 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:34.530 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.530 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.530 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:31:34.530 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:34.530 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.530 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.530 Malloc0 00:31:34.530 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.531 17:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:31:34.531 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.531 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.531 Delay0 00:31:34.531 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.531 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.531 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.531 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 [2024-07-22 17:09:36.151125] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 [2024-07-22 17:09:36.183378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 
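Up to this point, the initiator_timeout setup traced above amounts to the sketch below: a 64 MB malloc bdev wrapped in a delay bdev with negligible latencies, exported through a single NVMe/TCP subsystem, then connected from the host. RPC names and arguments are copied from the trace; rpc_cmd and the NVME_HOST hostnqn/hostid pair are the helpers and variables defined in nvmf/common.sh as shown earlier.

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                              # 64 MB, 512-byte blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # avg/p99 read/write delays
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Further down in the trace, the test raises the Delay0 latencies with bdev_delay_update_latency to around 31,000,000 (the delay bdev takes these values in microseconds, so roughly 31 seconds) while the fio write/verify job is running, sleeps, and then drops them back to 30. That is how it exercises the host's I/O timeout handling while still letting the job finish cleanly, as the fio_status=0 check later in the trace confirms.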
00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:34.790 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=81137 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:31:37.322 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:31:37.322 [global] 00:31:37.322 thread=1 00:31:37.322 invalidate=1 00:31:37.322 rw=write 00:31:37.322 time_based=1 00:31:37.322 runtime=60 00:31:37.322 ioengine=libaio 00:31:37.322 direct=1 00:31:37.322 bs=4096 00:31:37.322 iodepth=1 00:31:37.322 norandommap=0 00:31:37.322 numjobs=1 00:31:37.322 00:31:37.322 verify_dump=1 00:31:37.322 verify_backlog=512 00:31:37.322 verify_state_save=0 00:31:37.322 do_verify=1 00:31:37.322 verify=crc32c-intel 00:31:37.322 [job0] 00:31:37.322 filename=/dev/nvme0n1 00:31:37.322 Could not set queue depth (nvme0n1) 00:31:37.322 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.322 fio-3.35 00:31:37.322 Starting 1 thread 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:39.851 true 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:39.851 true 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:39.851 true 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:39.851 true 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.851 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:43.168 true 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:43.168 true 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:43.168 true 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:43.168 true 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:31:43.168 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@54 -- # wait 81137 00:32:39.391 00:32:39.391 job0: (groupid=0, jobs=1): err= 0: pid=81164: Mon Jul 22 17:10:38 2024 00:32:39.391 read: IOPS=697, BW=2792KiB/s (2859kB/s)(164MiB/60000msec) 00:32:39.391 slat (usec): min=7, max=459, avg=15.63, stdev= 6.78 00:32:39.391 clat (usec): min=147, max=3992, avg=241.54, stdev=74.13 00:32:39.391 lat (usec): min=177, max=4030, avg=257.17, stdev=76.92 00:32:39.391 clat percentiles (usec): 00:32:39.391 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:32:39.391 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:32:39.391 | 70.00th=[ 243], 80.00th=[ 262], 90.00th=[ 306], 95.00th=[ 347], 00:32:39.391 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 783], 99.95th=[ 988], 00:32:39.391 | 99.99th=[ 2868] 00:32:39.391 write: IOPS=699, BW=2799KiB/s (2866kB/s)(164MiB/60000msec); 0 zone resets 00:32:39.391 slat (usec): min=10, max=11410, avg=23.06, stdev=67.56 00:32:39.391 clat (usec): min=75, max=40615k, avg=1146.12, stdev=198215.51 00:32:39.391 lat (usec): min=132, max=40615k, avg=1169.18, stdev=198215.51 00:32:39.391 clat percentiles (usec): 00:32:39.391 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:32:39.391 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:32:39.391 | 70.00th=[ 184], 80.00th=[ 198], 90.00th=[ 223], 95.00th=[ 249], 00:32:39.391 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 676], 99.95th=[ 930], 00:32:39.391 | 99.99th=[ 1811] 00:32:39.391 bw ( KiB/s): min= 1128, max=11200, per=100.00%, avg=8417.44, stdev=1856.26, samples=39 00:32:39.391 iops : min= 282, max= 2800, avg=2104.36, stdev=464.06, samples=39 00:32:39.391 lat (usec) : 100=0.01%, 250=85.06%, 500=14.26%, 750=0.58%, 1000=0.06% 00:32:39.391 lat (msec) : 2=0.03%, 4=0.01%, >=2000=0.01% 00:32:39.391 cpu : usr=0.53%, sys=2.13%, ctx=83939, majf=0, minf=2 00:32:39.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.391 issued rwts: total=41875,41984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:39.391 00:32:39.391 Run status group 0 (all jobs): 00:32:39.391 READ: bw=2792KiB/s (2859kB/s), 2792KiB/s-2792KiB/s (2859kB/s-2859kB/s), io=164MiB (172MB), run=60000-60000msec 00:32:39.391 WRITE: bw=2799KiB/s (2866kB/s), 2799KiB/s-2799KiB/s (2866kB/s-2866kB/s), io=164MiB (172MB), run=60000-60000msec 00:32:39.391 00:32:39.391 Disk stats (read/write): 00:32:39.391 nvme0n1: ios=41798/41984, merge=0/0, ticks=10292/7824, in_queue=18116, util=99.52% 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:39.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:39.391 17:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:32:39.391 nvmf hotplug test: fio successful as expected 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:39.391 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:39.391 rmmod nvme_tcp 00:32:39.391 rmmod nvme_fabrics 00:32:39.391 rmmod nvme_keyring 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 81077 ']' 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 81077 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 81077 ']' 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 81077 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:39.392 17:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81077 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:39.392 killing process with pid 81077 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81077' 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 81077 00:32:39.392 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 81077 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:39.392 00:32:39.392 real 1m6.576s 00:32:39.392 user 3m54.601s 00:32:39.392 sys 0m24.531s 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:39.392 ************************************ 00:32:39.392 END TEST nvmf_initiator_timeout 00:32:39.392 ************************************ 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:32:39.392 00:32:39.392 real 7m20.498s 00:32:39.392 user 17m34.988s 00:32:39.392 sys 2m9.581s 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.392 17:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:39.392 ************************************ 00:32:39.392 END TEST nvmf_target_extra 00:32:39.392 ************************************ 00:32:39.392 17:10:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:39.392 17:10:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:39.392 17:10:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.392 17:10:40 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.392 17:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.392 ************************************ 00:32:39.392 START TEST nvmf_host 00:32:39.392 ************************************ 00:32:39.392 17:10:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:39.651 * Looking for test storage... 00:32:39.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.651 ************************************ 00:32:39.651 START TEST nvmf_identify 00:32:39.651 ************************************ 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:39.651 * Looking for test storage... 
00:32:39.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.651 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:39.652 Cannot find device "nvmf_tgt_br" 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:39.652 Cannot find device "nvmf_tgt_br2" 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:32:39.652 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:39.910 Cannot find device "nvmf_tgt_br" 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:39.910 Cannot find device "nvmf_tgt_br2" 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:39.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:39.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:39.910 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
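[editorial note] The nvmf_veth_init steps traced above and continued just below build a small bridged test network: a bridge (nvmf_br) joining the host-side veth peers, the initiator address on the host, and the two target addresses inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology, using the same interface names and addresses as the trace (error handling omitted; the remaining bridge attachments and the second iptables rule follow in the trace below):

# Recreate the test network that nvmf_veth_init builds (names/addresses as in the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side (host)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br   # nvmf_tgt_br and nvmf_tgt_br2 are attached the same way in the trace below
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT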
00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:40.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:32:40.169 00:32:40.169 --- 10.0.0.2 ping statistics --- 00:32:40.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.169 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:40.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:40.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:32:40.169 00:32:40.169 --- 10.0.0.3 ping statistics --- 00:32:40.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.169 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:40.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:40.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:32:40.169 00:32:40.169 --- 10.0.0.1 ping statistics --- 00:32:40.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.169 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=82016 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 82016 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@829 -- # '[' -z 82016 ']' 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:40.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:40.169 17:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:40.428 [2024-07-22 17:10:41.807026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:40.428 [2024-07-22 17:10:41.807283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.428 [2024-07-22 17:10:41.999804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:40.687 [2024-07-22 17:10:42.265873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:40.687 [2024-07-22 17:10:42.265936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.687 [2024-07-22 17:10:42.265951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.687 [2024-07-22 17:10:42.265966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:40.687 [2024-07-22 17:10:42.265980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
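[editorial note] As the app_setup_trace notices above point out, the tracepoint data of this nvmf_tgt instance (started with shm id 0 and trace mask 0xFFFF) can be snapshotted at runtime or kept for offline analysis. A minimal sketch, assuming the default SPDK build layout for the spdk_trace binary:

# Snapshot nvmf tracepoints of the running target (shm id 0, matching "-i 0" above)
./build/bin/spdk_trace -s nvmf -i 0
# ...or keep the raw shared-memory trace file for later analysis, as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0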
00:32:40.687 [2024-07-22 17:10:42.266283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.687 [2024-07-22 17:10:42.266717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:40.687 [2024-07-22 17:10:42.267232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.687 [2024-07-22 17:10:42.267289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:40.945 [2024-07-22 17:10:42.538774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.203 [2024-07-22 17:10:42.733267] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.203 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.462 Malloc0 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.462 [2024-07-22 17:10:42.926036] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.462 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.462 [ 00:32:41.462 { 00:32:41.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:41.462 "subtype": "Discovery", 00:32:41.462 "listen_addresses": [ 00:32:41.462 { 00:32:41.462 "trtype": "TCP", 00:32:41.462 "adrfam": "IPv4", 00:32:41.462 "traddr": "10.0.0.2", 00:32:41.462 "trsvcid": "4420" 00:32:41.462 } 00:32:41.462 ], 00:32:41.462 "allow_any_host": true, 00:32:41.462 "hosts": [] 00:32:41.462 }, 00:32:41.462 { 00:32:41.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.462 "subtype": "NVMe", 00:32:41.462 "listen_addresses": [ 00:32:41.462 { 00:32:41.462 "trtype": "TCP", 00:32:41.462 "adrfam": "IPv4", 00:32:41.462 "traddr": "10.0.0.2", 00:32:41.462 "trsvcid": "4420" 00:32:41.462 } 00:32:41.462 ], 00:32:41.462 "allow_any_host": true, 00:32:41.462 "hosts": [], 00:32:41.462 "serial_number": "SPDK00000000000001", 00:32:41.462 "model_number": "SPDK bdev Controller", 00:32:41.462 "max_namespaces": 32, 00:32:41.462 "min_cntlid": 1, 00:32:41.462 "max_cntlid": 65519, 00:32:41.462 "namespaces": [ 00:32:41.462 { 00:32:41.462 "nsid": 1, 00:32:41.462 "bdev_name": "Malloc0", 00:32:41.462 "name": "Malloc0", 00:32:41.462 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:41.462 "eui64": "ABCDEF0123456789", 00:32:41.462 "uuid": "7245c8e7-3363-408f-8de1-a447ad376164" 00:32:41.462 } 00:32:41.462 ] 00:32:41.462 } 00:32:41.463 ] 00:32:41.463 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.463 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:41.463 [2024-07-22 17:10:43.022820] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
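[editorial note] The subsystem JSON above is the result of the handful of rpc_cmd calls traced earlier in this test; condensed into one place before the spdk_nvme_identify output that follows. rpc_cmd is the test harness wrapper around SPDK's scripts/rpc.py, so the roughly equivalent manual invocation (paths relative to the SPDK repo root) would look like this sketch:

# Build the NVMe-oF TCP target configuration shown above: one malloc namespace on cnode1.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems   # prints the JSON shown above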
00:32:41.463 [2024-07-22 17:10:43.022961] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82057 ] 00:32:41.724 [2024-07-22 17:10:43.212187] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:41.724 [2024-07-22 17:10:43.212395] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:41.724 [2024-07-22 17:10:43.212414] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:41.724 [2024-07-22 17:10:43.212454] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:41.724 [2024-07-22 17:10:43.212478] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:41.724 [2024-07-22 17:10:43.212740] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:41.724 [2024-07-22 17:10:43.212859] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:32:41.724 [2024-07-22 17:10:43.217297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:41.724 [2024-07-22 17:10:43.217342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:41.724 [2024-07-22 17:10:43.217353] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:41.724 [2024-07-22 17:10:43.217368] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:41.724 [2024-07-22 17:10:43.217470] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.724 [2024-07-22 17:10:43.217487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.724 [2024-07-22 17:10:43.217496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.217525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:41.725 [2024-07-22 17:10:43.217576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.225285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.225321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.225346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.225366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.225400] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:41.725 [2024-07-22 17:10:43.225423] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:41.725 [2024-07-22 17:10:43.225436] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:41.725 [2024-07-22 17:10:43.225465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.225475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.225483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.225505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.225546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.225726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.225744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.225752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.225761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.225773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:41.725 [2024-07-22 17:10:43.225787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:41.725 [2024-07-22 17:10:43.225801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.225809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.225817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.225834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.225862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.225974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.225984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.225991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.226013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:41.725 [2024-07-22 17:10:43.226027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:41.725 [2024-07-22 17:10:43.226040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.226069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.226090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.226188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:32:41.725 [2024-07-22 17:10:43.226198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.226205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.226223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:41.725 [2024-07-22 17:10:43.226239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.226298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.226322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.226425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.226439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.226447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.226464] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:41.725 [2024-07-22 17:10:43.226474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:41.725 [2024-07-22 17:10:43.226487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:41.725 [2024-07-22 17:10:43.226597] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:41.725 [2024-07-22 17:10:43.226612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:41.725 [2024-07-22 17:10:43.226628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.226661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.226684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.226803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.226825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 
17:10:43.226832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.226850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:41.725 [2024-07-22 17:10:43.226867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.226883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.226896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.226917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.227016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.227026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.227033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.227050] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:41.725 [2024-07-22 17:10:43.227063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:41.725 [2024-07-22 17:10:43.227076] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:41.725 [2024-07-22 17:10:43.227097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:41.725 [2024-07-22 17:10:43.227123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.725 [2024-07-22 17:10:43.227146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.725 [2024-07-22 17:10:43.227185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.725 [2024-07-22 17:10:43.227379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.725 [2024-07-22 17:10:43.227392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.725 [2024-07-22 17:10:43.227399] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227408] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:32:41.725 [2024-07-22 17:10:43.227418] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:32:41.725 [2024-07-22 17:10:43.227430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227471] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.725 [2024-07-22 17:10:43.227507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.725 [2024-07-22 17:10:43.227514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.725 [2024-07-22 17:10:43.227521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.725 [2024-07-22 17:10:43.227540] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:41.725 [2024-07-22 17:10:43.227550] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:41.725 [2024-07-22 17:10:43.227559] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:41.726 [2024-07-22 17:10:43.227573] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:41.726 [2024-07-22 17:10:43.227588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:41.726 [2024-07-22 17:10:43.227598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:41.726 [2024-07-22 17:10:43.227613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:41.726 [2024-07-22 17:10:43.227628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.227659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:41.726 [2024-07-22 17:10:43.227685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.726 [2024-07-22 17:10:43.227800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.726 [2024-07-22 17:10:43.227810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.726 [2024-07-22 17:10:43.227826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.726 [2024-07-22 17:10:43.227847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.227883] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.726 [2024-07-22 17:10:43.227895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.227919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.726 [2024-07-22 17:10:43.227929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.227953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.726 [2024-07-22 17:10:43.227963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.227979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.227990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.726 [2024-07-22 17:10:43.227999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:41.726 [2024-07-22 17:10:43.228018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:41.726 [2024-07-22 17:10:43.228030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.228051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.726 [2024-07-22 17:10:43.228076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.726 [2024-07-22 17:10:43.228085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:32:41.726 [2024-07-22 17:10:43.228093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:32:41.726 [2024-07-22 17:10:43.228101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.726 [2024-07-22 17:10:43.228108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.726 [2024-07-22 17:10:43.228316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.726 [2024-07-22 17:10:43.228337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.726 [2024-07-22 17:10:43.228344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
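The admin-queue sequence traced above (CSTS.RDY polling, IDENTIFY controller, AER configuration, four ASYNC EVENT REQUESTs, keep-alive timer read) is the attach handshake that the SPDK NVMe host library drives on behalf of the identify tool. A minimal, self-contained sketch of the same attach against this discovery target, written against the public spdk/nvme.h API and not part of this test run (option handling simplified, application name hypothetical):

#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "discovery_attach_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Transport ID of the discovery subsystem the trace above is attached to
     * (same key:value format the identify tool accepts via -r). */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() runs the init state machine seen in the trace:
     * CC.EN/CSTS.RDY handshake, IDENTIFY controller, AER configuration and
     * keep-alive setup, after which the controller reaches the ready state. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}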
00:32:41.726 [2024-07-22 17:10:43.228352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.726 [2024-07-22 17:10:43.228363] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:41.726 [2024-07-22 17:10:43.228374] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:41.726 [2024-07-22 17:10:43.228396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.228421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.726 [2024-07-22 17:10:43.228449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.726 [2024-07-22 17:10:43.228584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.726 [2024-07-22 17:10:43.228595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.726 [2024-07-22 17:10:43.228602] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228611] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:32:41.726 [2024-07-22 17:10:43.228620] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:32:41.726 [2024-07-22 17:10:43.228628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228642] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228649] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.726 [2024-07-22 17:10:43.228678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.726 [2024-07-22 17:10:43.228684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.726 [2024-07-22 17:10:43.228728] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:41.726 [2024-07-22 17:10:43.228811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.228845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.726 [2024-07-22 17:10:43.228858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.228873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 
17:10:43.228892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.726 [2024-07-22 17:10:43.228921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.726 [2024-07-22 17:10:43.228935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:41.726 [2024-07-22 17:10:43.233313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.726 [2024-07-22 17:10:43.233357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.726 [2024-07-22 17:10:43.233367] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:32:41.726 [2024-07-22 17:10:43.233387] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:32:41.726 [2024-07-22 17:10:43.233398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233412] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233420] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.726 [2024-07-22 17:10:43.233445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.726 [2024-07-22 17:10:43.233452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:32:41.726 [2024-07-22 17:10:43.233475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.726 [2024-07-22 17:10:43.233485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.726 [2024-07-22 17:10:43.233491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.726 [2024-07-22 17:10:43.233533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.726 [2024-07-22 17:10:43.233568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.726 [2024-07-22 17:10:43.233607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.726 [2024-07-22 17:10:43.233837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.726 [2024-07-22 17:10:43.233856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.726 [2024-07-22 17:10:43.233863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233871] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:32:41.726 [2024-07-22 17:10:43.233879] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:32:41.726 [2024-07-22 17:10:43.233888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233899] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.726 [2024-07-22 17:10:43.233906] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.233921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.727 [2024-07-22 17:10:43.233947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.727 [2024-07-22 17:10:43.233954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.233961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.727 [2024-07-22 17:10:43.233981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.233990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.727 [2024-07-22 17:10:43.234004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.727 [2024-07-22 17:10:43.234036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.727 [2024-07-22 17:10:43.234220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.727 [2024-07-22 17:10:43.234234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.727 [2024-07-22 17:10:43.234241] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.234248] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:32:41.727 [2024-07-22 17:10:43.234256] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:32:41.727 [2024-07-22 17:10:43.234307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.234322] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.234330] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.727 [2024-07-22 17:10:43.234352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.727 [2024-07-22 17:10:43.234363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.727 [2024-07-22 17:10:43.234369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.727 ===================================================== 00:32:41.727 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:41.727 ===================================================== 00:32:41.727 Controller Capabilities/Features 00:32:41.727 ================================ 00:32:41.727 Vendor ID: 0000 00:32:41.727 Subsystem Vendor ID: 0000 00:32:41.727 Serial Number: .................... 00:32:41.727 Model Number: ........................................ 
00:32:41.727 Firmware Version: 24.09 00:32:41.727 Recommended Arb Burst: 0 00:32:41.727 IEEE OUI Identifier: 00 00 00 00:32:41.727 Multi-path I/O 00:32:41.727 May have multiple subsystem ports: No 00:32:41.727 May have multiple controllers: No 00:32:41.727 Associated with SR-IOV VF: No 00:32:41.727 Max Data Transfer Size: 131072 00:32:41.727 Max Number of Namespaces: 0 00:32:41.727 Max Number of I/O Queues: 1024 00:32:41.727 NVMe Specification Version (VS): 1.3 00:32:41.727 NVMe Specification Version (Identify): 1.3 00:32:41.727 Maximum Queue Entries: 128 00:32:41.727 Contiguous Queues Required: Yes 00:32:41.727 Arbitration Mechanisms Supported 00:32:41.727 Weighted Round Robin: Not Supported 00:32:41.727 Vendor Specific: Not Supported 00:32:41.727 Reset Timeout: 15000 ms 00:32:41.727 Doorbell Stride: 4 bytes 00:32:41.727 NVM Subsystem Reset: Not Supported 00:32:41.727 Command Sets Supported 00:32:41.727 NVM Command Set: Supported 00:32:41.727 Boot Partition: Not Supported 00:32:41.727 Memory Page Size Minimum: 4096 bytes 00:32:41.727 Memory Page Size Maximum: 4096 bytes 00:32:41.727 Persistent Memory Region: Not Supported 00:32:41.727 Optional Asynchronous Events Supported 00:32:41.727 Namespace Attribute Notices: Not Supported 00:32:41.727 Firmware Activation Notices: Not Supported 00:32:41.727 ANA Change Notices: Not Supported 00:32:41.727 PLE Aggregate Log Change Notices: Not Supported 00:32:41.727 LBA Status Info Alert Notices: Not Supported 00:32:41.727 EGE Aggregate Log Change Notices: Not Supported 00:32:41.727 Normal NVM Subsystem Shutdown event: Not Supported 00:32:41.727 Zone Descriptor Change Notices: Not Supported 00:32:41.727 Discovery Log Change Notices: Supported 00:32:41.727 Controller Attributes 00:32:41.727 128-bit Host Identifier: Not Supported 00:32:41.727 Non-Operational Permissive Mode: Not Supported 00:32:41.727 NVM Sets: Not Supported 00:32:41.727 Read Recovery Levels: Not Supported 00:32:41.727 Endurance Groups: Not Supported 00:32:41.727 Predictable Latency Mode: Not Supported 00:32:41.727 Traffic Based Keep ALive: Not Supported 00:32:41.727 Namespace Granularity: Not Supported 00:32:41.727 SQ Associations: Not Supported 00:32:41.727 UUID List: Not Supported 00:32:41.727 Multi-Domain Subsystem: Not Supported 00:32:41.727 Fixed Capacity Management: Not Supported 00:32:41.727 Variable Capacity Management: Not Supported 00:32:41.727 Delete Endurance Group: Not Supported 00:32:41.727 Delete NVM Set: Not Supported 00:32:41.727 Extended LBA Formats Supported: Not Supported 00:32:41.727 Flexible Data Placement Supported: Not Supported 00:32:41.727 00:32:41.727 Controller Memory Buffer Support 00:32:41.727 ================================ 00:32:41.727 Supported: No 00:32:41.727 00:32:41.727 Persistent Memory Region Support 00:32:41.727 ================================ 00:32:41.727 Supported: No 00:32:41.727 00:32:41.727 Admin Command Set Attributes 00:32:41.727 ============================ 00:32:41.727 Security Send/Receive: Not Supported 00:32:41.727 Format NVM: Not Supported 00:32:41.727 Firmware Activate/Download: Not Supported 00:32:41.727 Namespace Management: Not Supported 00:32:41.727 Device Self-Test: Not Supported 00:32:41.727 Directives: Not Supported 00:32:41.727 NVMe-MI: Not Supported 00:32:41.727 Virtualization Management: Not Supported 00:32:41.727 Doorbell Buffer Config: Not Supported 00:32:41.727 Get LBA Status Capability: Not Supported 00:32:41.727 Command & Feature Lockdown Capability: Not Supported 00:32:41.727 Abort Command Limit: 1 00:32:41.727 Async 
Event Request Limit: 4 00:32:41.727 Number of Firmware Slots: N/A 00:32:41.727 Firmware Slot 1 Read-Only: N/A 00:32:41.727 [2024-07-22 17:10:43.234377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.727 Firmware Activation Without Reset: N/A 00:32:41.727 Multiple Update Detection Support: N/A 00:32:41.727 Firmware Update Granularity: No Information Provided 00:32:41.727 Per-Namespace SMART Log: No 00:32:41.727 Asymmetric Namespace Access Log Page: Not Supported 00:32:41.727 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:41.727 Command Effects Log Page: Not Supported 00:32:41.727 Get Log Page Extended Data: Supported 00:32:41.727 Telemetry Log Pages: Not Supported 00:32:41.727 Persistent Event Log Pages: Not Supported 00:32:41.727 Supported Log Pages Log Page: May Support 00:32:41.727 Commands Supported & Effects Log Page: Not Supported 00:32:41.727 Feature Identifiers & Effects Log Page:May Support 00:32:41.727 NVMe-MI Commands & Effects Log Page: May Support 00:32:41.727 Data Area 4 for Telemetry Log: Not Supported 00:32:41.727 Error Log Page Entries Supported: 128 00:32:41.727 Keep Alive: Not Supported 00:32:41.727 00:32:41.727 NVM Command Set Attributes 00:32:41.727 ========================== 00:32:41.727 Submission Queue Entry Size 00:32:41.727 Max: 1 00:32:41.727 Min: 1 00:32:41.727 Completion Queue Entry Size 00:32:41.727 Max: 1 00:32:41.727 Min: 1 00:32:41.727 Number of Namespaces: 0 00:32:41.727 Compare Command: Not Supported 00:32:41.727 Write Uncorrectable Command: Not Supported 00:32:41.727 Dataset Management Command: Not Supported 00:32:41.727 Write Zeroes Command: Not Supported 00:32:41.727 Set Features Save Field: Not Supported 00:32:41.727 Reservations: Not Supported 00:32:41.727 Timestamp: Not Supported 00:32:41.727 Copy: Not Supported 00:32:41.727 Volatile Write Cache: Not Present 00:32:41.727 Atomic Write Unit (Normal): 1 00:32:41.727 Atomic Write Unit (PFail): 1 00:32:41.727 Atomic Compare & Write Unit: 1 00:32:41.727 Fused Compare & Write: Supported 00:32:41.727 Scatter-Gather List 00:32:41.727 SGL Command Set: Supported 00:32:41.727 SGL Keyed: Supported 00:32:41.727 SGL Bit Bucket Descriptor: Not Supported 00:32:41.727 SGL Metadata Pointer: Not Supported 00:32:41.727 Oversized SGL: Not Supported 00:32:41.727 SGL Metadata Address: Not Supported 00:32:41.727 SGL Offset: Supported 00:32:41.727 Transport SGL Data Block: Not Supported 00:32:41.727 Replay Protected Memory Block: Not Supported 00:32:41.727 00:32:41.727 Firmware Slot Information 00:32:41.727 ========================= 00:32:41.727 Active slot: 0 00:32:41.727 00:32:41.727 00:32:41.727 Error Log 00:32:41.727 ========= 00:32:41.727 00:32:41.727 Active Namespaces 00:32:41.727 ================= 00:32:41.727 Discovery Log Page 00:32:41.728 ================== 00:32:41.728 Generation Counter: 2 00:32:41.728 Number of Records: 2 00:32:41.728 Record Format: 0 00:32:41.728 00:32:41.728 Discovery Log Entry 0 00:32:41.728 ---------------------- 00:32:41.728 Transport Type: 3 (TCP) 00:32:41.728 Address Family: 1 (IPv4) 00:32:41.728 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:41.728 Entry Flags: 00:32:41.728 Duplicate Returned Information: 1 00:32:41.728 Explicit Persistent Connection Support for Discovery: 1 00:32:41.728 Transport Requirements: 00:32:41.728 Secure Channel: Not Required 00:32:41.728 Port ID: 0 (0x0000) 00:32:41.728 Controller ID: 65535 (0xffff) 00:32:41.728 Admin Max SQ Size: 128 00:32:41.728 Transport Service Identifier: 
4420 00:32:41.728 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:41.728 Transport Address: 10.0.0.2 00:32:41.728 Discovery Log Entry 1 00:32:41.728 ---------------------- 00:32:41.728 Transport Type: 3 (TCP) 00:32:41.728 Address Family: 1 (IPv4) 00:32:41.728 Subsystem Type: 2 (NVM Subsystem) 00:32:41.728 Entry Flags: 00:32:41.728 Duplicate Returned Information: 0 00:32:41.728 Explicit Persistent Connection Support for Discovery: 0 00:32:41.728 Transport Requirements: 00:32:41.728 Secure Channel: Not Required 00:32:41.728 Port ID: 0 (0x0000) 00:32:41.728 Controller ID: 65535 (0xffff) 00:32:41.728 Admin Max SQ Size: 128 00:32:41.728 Transport Service Identifier: 4420 00:32:41.728 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:41.728 Transport Address: 10.0.0.2 [2024-07-22 17:10:43.234559] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:41.728 [2024-07-22 17:10:43.234578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.234593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.728 [2024-07-22 17:10:43.234604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.234618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.728 [2024-07-22 17:10:43.234630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.234640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.728 [2024-07-22 17:10:43.234649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.234659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.728 [2024-07-22 17:10:43.234679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.234691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.234699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.234714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.234741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.234845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.234857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.234865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.234873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.234888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.234899] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.234907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.234924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.234952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.235101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.235121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.235128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.235147] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:41.728 [2024-07-22 17:10:43.235157] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:41.728 [2024-07-22 17:10:43.235173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.235220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.235260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.235397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.235408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.235414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.235438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.235464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.235487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.235596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.235611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.235617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.235645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.235671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.235692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.235782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.235792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.235799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.235830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.235845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.235857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.235894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.236007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.236017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.236025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.236032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.236051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.236059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.236066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.236078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.728 [2024-07-22 17:10:43.236098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.728 [2024-07-22 17:10:43.236195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.728 [2024-07-22 17:10:43.236206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.728 [2024-07-22 17:10:43.236213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.236220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.728 [2024-07-22 17:10:43.236235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.728 
[2024-07-22 17:10:43.236242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.728 [2024-07-22 17:10:43.236249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.728 [2024-07-22 17:10:43.236276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.236299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.236400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.236415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.236422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.236445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.729 [2024-07-22 17:10:43.236476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.236497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.236579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.236589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.236599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.236622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.729 [2024-07-22 17:10:43.236648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.236668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.236753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.236764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.236771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.236798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236813] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.729 [2024-07-22 17:10:43.236828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.236849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.236931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.236941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.236948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.236973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.236988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.729 [2024-07-22 17:10:43.237000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.237020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.237122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.237139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.237145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.237152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.237167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.237185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.237193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.729 [2024-07-22 17:10:43.237204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.237228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.241301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.241338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.241346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.241356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.241384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.241393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.241400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.729 [2024-07-22 17:10:43.241417] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.729 [2024-07-22 17:10:43.241458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.729 [2024-07-22 17:10:43.241586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.729 [2024-07-22 17:10:43.241596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.729 [2024-07-22 17:10:43.241602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.729 [2024-07-22 17:10:43.241610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.729 [2024-07-22 17:10:43.241626] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:32:41.729 00:32:41.729 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:41.991 [2024-07-22 17:10:43.367026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:41.991 [2024-07-22 17:10:43.367142] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82060 ] 00:32:41.991 [2024-07-22 17:10:43.536099] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:41.991 [2024-07-22 17:10:43.540311] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:41.991 [2024-07-22 17:10:43.540349] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:41.991 [2024-07-22 17:10:43.540385] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:41.991 [2024-07-22 17:10:43.540403] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:41.991 [2024-07-22 17:10:43.540596] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:41.991 [2024-07-22 17:10:43.540665] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:32:41.991 [2024-07-22 17:10:43.548279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:41.991 [2024-07-22 17:10:43.548344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:41.991 [2024-07-22 17:10:43.548356] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:41.991 [2024-07-22 17:10:43.548369] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:41.991 [2024-07-22 17:10:43.548473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.548486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.548495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.991 [2024-07-22 17:10:43.548522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:41.991 [2024-07-22 
17:10:43.548569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.991 [2024-07-22 17:10:43.555319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.991 [2024-07-22 17:10:43.555362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.991 [2024-07-22 17:10:43.555377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.991 [2024-07-22 17:10:43.555411] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:41.991 [2024-07-22 17:10:43.555433] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:41.991 [2024-07-22 17:10:43.555446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:41.991 [2024-07-22 17:10:43.555478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.991 [2024-07-22 17:10:43.555518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.991 [2024-07-22 17:10:43.555566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.991 [2024-07-22 17:10:43.555655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.991 [2024-07-22 17:10:43.555666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.991 [2024-07-22 17:10:43.555674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.991 [2024-07-22 17:10:43.555695] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:41.991 [2024-07-22 17:10:43.555711] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:41.991 [2024-07-22 17:10:43.555725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.991 [2024-07-22 17:10:43.555761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.991 [2024-07-22 17:10:43.555784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.991 [2024-07-22 17:10:43.555868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.991 [2024-07-22 17:10:43.555880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.991 [2024-07-22 17:10:43.555887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555895] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.991 [2024-07-22 17:10:43.555906] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:41.991 [2024-07-22 17:10:43.555921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:41.991 [2024-07-22 17:10:43.555933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.555949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.991 [2024-07-22 17:10:43.555962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.991 [2024-07-22 17:10:43.555989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.991 [2024-07-22 17:10:43.556043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.991 [2024-07-22 17:10:43.556054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.991 [2024-07-22 17:10:43.556061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.556069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.991 [2024-07-22 17:10:43.556079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:41.991 [2024-07-22 17:10:43.556100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.556108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.556116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.991 [2024-07-22 17:10:43.556132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.991 [2024-07-22 17:10:43.556157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.991 [2024-07-22 17:10:43.556209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.991 [2024-07-22 17:10:43.556219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.991 [2024-07-22 17:10:43.556226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.991 [2024-07-22 17:10:43.556233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.991 [2024-07-22 17:10:43.556243] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:41.992 [2024-07-22 17:10:43.556282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:41.992 [2024-07-22 17:10:43.556319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:41.992 [2024-07-22 17:10:43.556430] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:41.992 [2024-07-22 17:10:43.556446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:41.992 [2024-07-22 17:10:43.556463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.556476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.556485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.556500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.992 [2024-07-22 17:10:43.556530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.992 [2024-07-22 17:10:43.556602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.992 [2024-07-22 17:10:43.556613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.992 [2024-07-22 17:10:43.556620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.556628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.992 [2024-07-22 17:10:43.556639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:41.992 [2024-07-22 17:10:43.556656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.556664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.556672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.556686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.992 [2024-07-22 17:10:43.556711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.992 [2024-07-22 17:10:43.556769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.992 [2024-07-22 17:10:43.556780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.992 [2024-07-22 17:10:43.556787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.556794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.992 [2024-07-22 17:10:43.556803] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:41.992 [2024-07-22 17:10:43.556813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.556826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:41.992 [2024-07-22 17:10:43.556846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.556874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 
17:10:43.556883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.556896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.992 [2024-07-22 17:10:43.556935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.992 [2024-07-22 17:10:43.557061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.992 [2024-07-22 17:10:43.557072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.992 [2024-07-22 17:10:43.557079] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557087] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:32:41.992 [2024-07-22 17:10:43.557098] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:32:41.992 [2024-07-22 17:10:43.557113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557130] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557139] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.992 [2024-07-22 17:10:43.557165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.992 [2024-07-22 17:10:43.557172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.992 [2024-07-22 17:10:43.557198] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:41.992 [2024-07-22 17:10:43.557208] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:41.992 [2024-07-22 17:10:43.557217] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:41.992 [2024-07-22 17:10:43.557229] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:41.992 [2024-07-22 17:10:43.557239] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:41.992 [2024-07-22 17:10:43.557262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.557329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 
len:0x0 00:32:41.992 [2024-07-22 17:10:43.557354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.992 [2024-07-22 17:10:43.557413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.992 [2024-07-22 17:10:43.557428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.992 [2024-07-22 17:10:43.557435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.992 [2024-07-22 17:10:43.557459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.557491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.992 [2024-07-22 17:10:43.557503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.557529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.992 [2024-07-22 17:10:43.557539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.557564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.992 [2024-07-22 17:10:43.557577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.557602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.992 [2024-07-22 17:10:43.557610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.992 [2024-07-22 17:10:43.557663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
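Once the cnode1 attach traced here finishes its remaining steps (set number of queues, identify active ns, identify ns in the records that follow), the namespaces it reports, such as the "Namespace 1 was added" message further down, are reachable through the controller handle. A small sketch of walking them, assuming the same public spdk/nvme.h API as above; the helper name is hypothetical:

#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical helper: list every active namespace of an attached controller. */
static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    /* Walk the active namespace IDs the controller reported during init. */
    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
            continue;
        }
        printf("nsid %" PRIu32 ": %" PRIu64 " bytes\n",
               nsid, spdk_nvme_ns_get_size(ns));
    }
}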
00:32:41.992 [2024-07-22 17:10:43.557688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:41.992 [2024-07-22 17:10:43.557697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:32:41.992 [2024-07-22 17:10:43.557706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:32:41.992 [2024-07-22 17:10:43.557714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.992 [2024-07-22 17:10:43.557726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.992 [2024-07-22 17:10:43.557821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.992 [2024-07-22 17:10:43.557831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.992 [2024-07-22 17:10:43.557838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.992 [2024-07-22 17:10:43.557859] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:41.992 [2024-07-22 17:10:43.557869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:41.992 [2024-07-22 17:10:43.557912] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.992 [2024-07-22 17:10:43.557931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.557944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:41.993 [2024-07-22 17:10:43.557965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.993 [2024-07-22 17:10:43.558023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.993 [2024-07-22 17:10:43.558033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.993 [2024-07-22 17:10:43.558039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.993 [2024-07-22 17:10:43.558151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.558182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.558201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558210] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.558223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.993 [2024-07-22 17:10:43.558266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.993 [2024-07-22 17:10:43.558372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.993 [2024-07-22 17:10:43.558384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.993 [2024-07-22 17:10:43.558391] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:32:41.993 [2024-07-22 17:10:43.558408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:32:41.993 [2024-07-22 17:10:43.558421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558434] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558442] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.993 [2024-07-22 17:10:43.558468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.993 [2024-07-22 17:10:43.558475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.993 [2024-07-22 17:10:43.558532] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:41.993 [2024-07-22 17:10:43.558559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.558586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.558605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.558630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.993 [2024-07-22 17:10:43.558657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.993 [2024-07-22 17:10:43.558749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.993 [2024-07-22 17:10:43.558768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.993 [2024-07-22 17:10:43.558775] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558782] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:32:41.993 [2024-07-22 17:10:43.558791] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:32:41.993 [2024-07-22 17:10:43.558799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558810] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558817] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.993 [2024-07-22 17:10:43.558839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.993 [2024-07-22 17:10:43.558846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.993 [2024-07-22 17:10:43.558913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.558935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.558952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.558961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.558975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.993 [2024-07-22 17:10:43.559000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.993 [2024-07-22 17:10:43.559076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.993 [2024-07-22 17:10:43.559086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.993 [2024-07-22 17:10:43.559093] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.559100] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:32:41.993 [2024-07-22 17:10:43.559109] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:32:41.993 [2024-07-22 17:10:43.559118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.559129] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.559136] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.559149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.993 [2024-07-22 17:10:43.559162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.993 [2024-07-22 17:10:43.559169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.559176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.993 [2024-07-22 17:10:43.559212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.559241] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.559258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.563302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.563338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.563350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.563373] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:41.993 [2024-07-22 17:10:43.563383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:41.993 [2024-07-22 17:10:43.563394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:41.993 [2024-07-22 17:10:43.563452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.563462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.563485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.993 [2024-07-22 17:10:43.563498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.563511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.563519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.563532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.993 [2024-07-22 17:10:43.563573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.993 [2024-07-22 17:10:43.563584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:41.993 [2024-07-22 17:10:43.563692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.993 [2024-07-22 17:10:43.563703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.993 [2024-07-22 17:10:43.563711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.563720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.993 [2024-07-22 17:10:43.563733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.993 [2024-07-22 17:10:43.563746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.993 [2024-07-22 17:10:43.563753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.563760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 
00:32:41.993 [2024-07-22 17:10:43.563776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.993 [2024-07-22 17:10:43.563784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:32:41.993 [2024-07-22 17:10:43.563796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.563832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:41.994 [2024-07-22 17:10:43.563899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.563909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.563916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.563923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:32:41.994 [2024-07-22 17:10:43.563939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.563946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:32:41.994 [2024-07-22 17:10:43.563958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.563979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:41.994 [2024-07-22 17:10:43.564039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.564049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.564056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:32:41.994 [2024-07-22 17:10:43.564082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:32:41.994 [2024-07-22 17:10:43.564118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.564141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:41.994 [2024-07-22 17:10:43.564194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.564209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.564216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:32:41.994 [2024-07-22 17:10:43.564295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:32:41.994 [2024-07-22 17:10:43.564323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff 
cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.564337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:32:41.994 [2024-07-22 17:10:43.564358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.564371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:32:41.994 [2024-07-22 17:10:43.564396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.564413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:32:41.994 [2024-07-22 17:10:43.564434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.994 [2024-07-22 17:10:43.564460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:41.994 [2024-07-22 17:10:43.564477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:41.994 [2024-07-22 17:10:43.564485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:32:41.994 [2024-07-22 17:10:43.564494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:32:41.994 [2024-07-22 17:10:43.564664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.994 [2024-07-22 17:10:43.564683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.994 [2024-07-22 17:10:43.564691] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564699] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:32:41.994 [2024-07-22 17:10:43.564710] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:32:41.994 [2024-07-22 17:10:43.564720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564749] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564767] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.994 [2024-07-22 17:10:43.564794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.994 [2024-07-22 17:10:43.564801] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:32:41.994 [2024-07-22 17:10:43.564817] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:32:41.994 [2024-07-22 17:10:43.564826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564836] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564843] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.994 [2024-07-22 17:10:43.564865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.994 [2024-07-22 17:10:43.564872] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564880] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:32:41.994 [2024-07-22 17:10:43.564889] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:32:41.994 [2024-07-22 17:10:43.564897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564910] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564917] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:41.994 [2024-07-22 17:10:43.564936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:41.994 [2024-07-22 17:10:43.564943] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564950] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:32:41.994 [2024-07-22 17:10:43.564958] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:32:41.994 [2024-07-22 17:10:43.564972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564983] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564990] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.564999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.565008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.565015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.565023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:32:41.994 [2024-07-22 17:10:43.565052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.565067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.565074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.565084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:32:41.994 [2024-07-22 17:10:43.565101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.565111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.565118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.565125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:32:41.994 [2024-07-22 17:10:43.565139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.994 [2024-07-22 17:10:43.565148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.994 [2024-07-22 17:10:43.565155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.994 [2024-07-22 17:10:43.565162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:32:41.994 ===================================================== 00:32:41.994 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:41.994 ===================================================== 00:32:41.994 Controller Capabilities/Features 00:32:41.994 ================================ 00:32:41.994 Vendor ID: 8086 00:32:41.994 Subsystem Vendor ID: 8086 00:32:41.994 Serial Number: SPDK00000000000001 00:32:41.994 Model Number: SPDK bdev Controller 00:32:41.994 Firmware Version: 24.09 00:32:41.994 Recommended Arb Burst: 6 00:32:41.994 IEEE OUI Identifier: e4 d2 5c 00:32:41.994 Multi-path I/O 00:32:41.994 May have multiple subsystem ports: Yes 00:32:41.994 May have multiple controllers: Yes 00:32:41.994 Associated with SR-IOV VF: No 00:32:41.994 Max Data Transfer Size: 131072 00:32:41.994 Max Number of Namespaces: 32 00:32:41.994 Max Number of I/O Queues: 127 00:32:41.994 NVMe Specification Version (VS): 1.3 00:32:41.994 NVMe Specification Version (Identify): 1.3 00:32:41.994 Maximum Queue Entries: 128 00:32:41.994 Contiguous Queues Required: Yes 00:32:41.994 Arbitration Mechanisms Supported 00:32:41.994 Weighted Round Robin: Not Supported 00:32:41.994 Vendor Specific: Not Supported 00:32:41.994 Reset Timeout: 15000 ms 00:32:41.994 Doorbell Stride: 4 bytes 00:32:41.994 NVM Subsystem Reset: Not Supported 00:32:41.994 Command Sets Supported 00:32:41.994 NVM Command Set: Supported 00:32:41.994 Boot Partition: Not Supported 00:32:41.994 Memory Page Size Minimum: 4096 bytes 00:32:41.994 Memory Page Size Maximum: 4096 bytes 00:32:41.994 Persistent Memory Region: Not Supported 00:32:41.995 Optional Asynchronous Events Supported 00:32:41.995 Namespace Attribute Notices: Supported 00:32:41.995 Firmware Activation Notices: Not Supported 00:32:41.995 ANA Change Notices: Not Supported 00:32:41.995 PLE Aggregate Log Change Notices: Not Supported 00:32:41.995 LBA Status Info Alert Notices: Not Supported 00:32:41.995 EGE Aggregate Log Change Notices: Not Supported 00:32:41.995 Normal NVM Subsystem Shutdown event: Not Supported 00:32:41.995 Zone Descriptor Change Notices: Not Supported 00:32:41.995 Discovery Log Change Notices: Not Supported 00:32:41.995 Controller Attributes 00:32:41.995 128-bit Host Identifier: Supported 00:32:41.995 Non-Operational Permissive Mode: Not Supported 00:32:41.995 NVM Sets: Not Supported 00:32:41.995 Read Recovery Levels: Not Supported 00:32:41.995 Endurance Groups: Not Supported 00:32:41.995 Predictable Latency Mode: Not Supported 00:32:41.995 Traffic Based Keep ALive: Not Supported 00:32:41.995 Namespace Granularity: Not Supported 00:32:41.995 SQ Associations: Not Supported 00:32:41.995 UUID List: Not Supported 00:32:41.995 Multi-Domain Subsystem: Not Supported 00:32:41.995 Fixed Capacity Management: Not Supported 
00:32:41.995 Variable Capacity Management: Not Supported 00:32:41.995 Delete Endurance Group: Not Supported 00:32:41.995 Delete NVM Set: Not Supported 00:32:41.995 Extended LBA Formats Supported: Not Supported 00:32:41.995 Flexible Data Placement Supported: Not Supported 00:32:41.995 00:32:41.995 Controller Memory Buffer Support 00:32:41.995 ================================ 00:32:41.995 Supported: No 00:32:41.995 00:32:41.995 Persistent Memory Region Support 00:32:41.995 ================================ 00:32:41.995 Supported: No 00:32:41.995 00:32:41.995 Admin Command Set Attributes 00:32:41.995 ============================ 00:32:41.995 Security Send/Receive: Not Supported 00:32:41.995 Format NVM: Not Supported 00:32:41.995 Firmware Activate/Download: Not Supported 00:32:41.995 Namespace Management: Not Supported 00:32:41.995 Device Self-Test: Not Supported 00:32:41.995 Directives: Not Supported 00:32:41.995 NVMe-MI: Not Supported 00:32:41.995 Virtualization Management: Not Supported 00:32:41.995 Doorbell Buffer Config: Not Supported 00:32:41.995 Get LBA Status Capability: Not Supported 00:32:41.995 Command & Feature Lockdown Capability: Not Supported 00:32:41.995 Abort Command Limit: 4 00:32:41.995 Async Event Request Limit: 4 00:32:41.995 Number of Firmware Slots: N/A 00:32:41.995 Firmware Slot 1 Read-Only: N/A 00:32:41.995 Firmware Activation Without Reset: N/A 00:32:41.995 Multiple Update Detection Support: N/A 00:32:41.995 Firmware Update Granularity: No Information Provided 00:32:41.995 Per-Namespace SMART Log: No 00:32:41.995 Asymmetric Namespace Access Log Page: Not Supported 00:32:41.995 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:41.995 Command Effects Log Page: Supported 00:32:41.995 Get Log Page Extended Data: Supported 00:32:41.995 Telemetry Log Pages: Not Supported 00:32:41.995 Persistent Event Log Pages: Not Supported 00:32:41.995 Supported Log Pages Log Page: May Support 00:32:41.995 Commands Supported & Effects Log Page: Not Supported 00:32:41.995 Feature Identifiers & Effects Log Page:May Support 00:32:41.995 NVMe-MI Commands & Effects Log Page: May Support 00:32:41.995 Data Area 4 for Telemetry Log: Not Supported 00:32:41.995 Error Log Page Entries Supported: 128 00:32:41.995 Keep Alive: Supported 00:32:41.995 Keep Alive Granularity: 10000 ms 00:32:41.995 00:32:41.995 NVM Command Set Attributes 00:32:41.995 ========================== 00:32:41.995 Submission Queue Entry Size 00:32:41.995 Max: 64 00:32:41.995 Min: 64 00:32:41.995 Completion Queue Entry Size 00:32:41.995 Max: 16 00:32:41.995 Min: 16 00:32:41.995 Number of Namespaces: 32 00:32:41.995 Compare Command: Supported 00:32:41.995 Write Uncorrectable Command: Not Supported 00:32:41.995 Dataset Management Command: Supported 00:32:41.995 Write Zeroes Command: Supported 00:32:41.995 Set Features Save Field: Not Supported 00:32:41.995 Reservations: Supported 00:32:41.995 Timestamp: Not Supported 00:32:41.995 Copy: Supported 00:32:41.995 Volatile Write Cache: Present 00:32:41.995 Atomic Write Unit (Normal): 1 00:32:41.995 Atomic Write Unit (PFail): 1 00:32:41.995 Atomic Compare & Write Unit: 1 00:32:41.995 Fused Compare & Write: Supported 00:32:41.995 Scatter-Gather List 00:32:41.995 SGL Command Set: Supported 00:32:41.995 SGL Keyed: Supported 00:32:41.995 SGL Bit Bucket Descriptor: Not Supported 00:32:41.995 SGL Metadata Pointer: Not Supported 00:32:41.995 Oversized SGL: Not Supported 00:32:41.995 SGL Metadata Address: Not Supported 00:32:41.995 SGL Offset: Supported 00:32:41.995 Transport SGL Data Block: Not 
Supported 00:32:41.995 Replay Protected Memory Block: Not Supported 00:32:41.995 00:32:41.995 Firmware Slot Information 00:32:41.995 ========================= 00:32:41.995 Active slot: 1 00:32:41.995 Slot 1 Firmware Revision: 24.09 00:32:41.995 00:32:41.995 00:32:41.995 Commands Supported and Effects 00:32:41.995 ============================== 00:32:41.995 Admin Commands 00:32:41.995 -------------- 00:32:41.995 Get Log Page (02h): Supported 00:32:41.995 Identify (06h): Supported 00:32:41.995 Abort (08h): Supported 00:32:41.995 Set Features (09h): Supported 00:32:41.995 Get Features (0Ah): Supported 00:32:41.995 Asynchronous Event Request (0Ch): Supported 00:32:41.995 Keep Alive (18h): Supported 00:32:41.995 I/O Commands 00:32:41.995 ------------ 00:32:41.995 Flush (00h): Supported LBA-Change 00:32:41.995 Write (01h): Supported LBA-Change 00:32:41.995 Read (02h): Supported 00:32:41.995 Compare (05h): Supported 00:32:41.995 Write Zeroes (08h): Supported LBA-Change 00:32:41.995 Dataset Management (09h): Supported LBA-Change 00:32:41.995 Copy (19h): Supported LBA-Change 00:32:41.995 00:32:41.995 Error Log 00:32:41.995 ========= 00:32:41.995 00:32:41.995 Arbitration 00:32:41.995 =========== 00:32:41.995 Arbitration Burst: 1 00:32:41.995 00:32:41.995 Power Management 00:32:41.995 ================ 00:32:41.995 Number of Power States: 1 00:32:41.995 Current Power State: Power State #0 00:32:41.995 Power State #0: 00:32:41.995 Max Power: 0.00 W 00:32:41.995 Non-Operational State: Operational 00:32:41.995 Entry Latency: Not Reported 00:32:41.995 Exit Latency: Not Reported 00:32:41.995 Relative Read Throughput: 0 00:32:41.995 Relative Read Latency: 0 00:32:41.995 Relative Write Throughput: 0 00:32:41.995 Relative Write Latency: 0 00:32:41.995 Idle Power: Not Reported 00:32:41.995 Active Power: Not Reported 00:32:41.995 Non-Operational Permissive Mode: Not Supported 00:32:41.995 00:32:41.995 Health Information 00:32:41.995 ================== 00:32:41.995 Critical Warnings: 00:32:41.995 Available Spare Space: OK 00:32:41.995 Temperature: OK 00:32:41.995 Device Reliability: OK 00:32:41.995 Read Only: No 00:32:41.995 Volatile Memory Backup: OK 00:32:41.995 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:41.995 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:32:41.995 Available Spare: 0% 00:32:41.995 Available Spare Threshold: 0% 00:32:41.995 Life Percentage Used:[2024-07-22 17:10:43.565385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.995 [2024-07-22 17:10:43.565401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:32:41.995 [2024-07-22 17:10:43.565421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.995 [2024-07-22 17:10:43.565453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:32:41.995 [2024-07-22 17:10:43.565529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.995 [2024-07-22 17:10:43.565549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.995 [2024-07-22 17:10:43.565558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.995 [2024-07-22 17:10:43.565566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:32:41.995 [2024-07-22 17:10:43.565658] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
Prepare to destruct SSD 00:32:41.995 [2024-07-22 17:10:43.565675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.565689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.996 [2024-07-22 17:10:43.565700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.565710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.996 [2024-07-22 17:10:43.565720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.565743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.996 [2024-07-22 17:10:43.565753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.565763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.996 [2024-07-22 17:10:43.565786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.565798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.565810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.565825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.565855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.565931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.565948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.565956] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.565964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.565982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.565991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.566015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.566045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.566140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.566150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.566156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.566174] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:41.996 [2024-07-22 17:10:43.566184] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:41.996 [2024-07-22 17:10:43.566200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.566229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.566283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.566344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.566358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.566365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.566389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.566423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.566444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.566494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.566504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.566511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.566537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.566564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.566584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.566636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.566646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.566653] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.566679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.566708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.566729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.566784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.566797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.566804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.566826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.566853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.566873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.566938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.566948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.566955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.566978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.566993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.567005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.567026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.567079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.567089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.567096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.567104] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.567119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.567127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.567134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.996 [2024-07-22 17:10:43.567146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.996 [2024-07-22 17:10:43.567167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.996 [2024-07-22 17:10:43.567218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.996 [2024-07-22 17:10:43.567228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.996 [2024-07-22 17:10:43.567235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.567243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.996 [2024-07-22 17:10:43.567258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:41.996 [2024-07-22 17:10:43.571294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:41.997 [2024-07-22 17:10:43.571312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:32:41.997 [2024-07-22 17:10:43.571333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.997 [2024-07-22 17:10:43.571382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:41.997 [2024-07-22 17:10:43.571472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:41.997 [2024-07-22 17:10:43.571484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:41.997 [2024-07-22 17:10:43.571492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:41.997 [2024-07-22 17:10:43.571501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:32:41.997 [2024-07-22 17:10:43.571518] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:32:42.256 0% 00:32:42.256 Data Units Read: 0 00:32:42.256 Data Units Written: 0 00:32:42.256 Host Read Commands: 0 00:32:42.256 Host Write Commands: 0 00:32:42.256 Controller Busy Time: 0 minutes 00:32:42.256 Power Cycles: 0 00:32:42.256 Power On Hours: 0 hours 00:32:42.256 Unsafe Shutdowns: 0 00:32:42.256 Unrecoverable Media Errors: 0 00:32:42.256 Lifetime Error Log Entries: 0 00:32:42.256 Warning Temperature Time: 0 minutes 00:32:42.256 Critical Temperature Time: 0 minutes 00:32:42.256 00:32:42.256 Number of Queues 00:32:42.256 ================ 00:32:42.256 Number of I/O Submission Queues: 127 00:32:42.256 Number of I/O Completion Queues: 127 00:32:42.256 00:32:42.256 Active Namespaces 00:32:42.256 ================= 00:32:42.256 Namespace ID:1 00:32:42.256 Error Recovery Timeout: Unlimited 00:32:42.256 Command Set Identifier: NVM (00h) 00:32:42.256 Deallocate: Supported 00:32:42.256 Deallocated/Unwritten Error: Not Supported 00:32:42.256 Deallocated Read Value: Unknown 00:32:42.256 Deallocate in Write Zeroes: Not Supported 
00:32:42.256 Deallocated Guard Field: 0xFFFF 00:32:42.256 Flush: Supported 00:32:42.256 Reservation: Supported 00:32:42.256 Namespace Sharing Capabilities: Multiple Controllers 00:32:42.256 Size (in LBAs): 131072 (0GiB) 00:32:42.256 Capacity (in LBAs): 131072 (0GiB) 00:32:42.256 Utilization (in LBAs): 131072 (0GiB) 00:32:42.256 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:42.256 EUI64: ABCDEF0123456789 00:32:42.256 UUID: 7245c8e7-3363-408f-8de1-a447ad376164 00:32:42.256 Thin Provisioning: Not Supported 00:32:42.256 Per-NS Atomic Units: Yes 00:32:42.256 Atomic Boundary Size (Normal): 0 00:32:42.256 Atomic Boundary Size (PFail): 0 00:32:42.256 Atomic Boundary Offset: 0 00:32:42.256 Maximum Single Source Range Length: 65535 00:32:42.256 Maximum Copy Length: 65535 00:32:42.256 Maximum Source Range Count: 1 00:32:42.256 NGUID/EUI64 Never Reused: No 00:32:42.256 Namespace Write Protected: No 00:32:42.256 Number of LBA Formats: 1 00:32:42.256 Current LBA Format: LBA Format #00 00:32:42.256 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:42.256 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.256 rmmod nvme_tcp 00:32:42.256 rmmod nvme_fabrics 00:32:42.256 rmmod nvme_keyring 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 82016 ']' 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 82016 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 82016 ']' 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 82016 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82016 00:32:42.256 17:10:43 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:42.256 killing process with pid 82016 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82016' 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 82016 00:32:42.256 17:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 82016 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:44.158 00:32:44.158 real 0m4.506s 00:32:44.158 user 0m11.691s 00:32:44.158 sys 0m1.044s 00:32:44.158 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:44.159 ************************************ 00:32:44.159 END TEST nvmf_identify 00:32:44.159 ************************************ 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.159 ************************************ 00:32:44.159 START TEST nvmf_perf 00:32:44.159 ************************************ 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:44.159 * Looking for test storage... 
00:32:44.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.159 17:10:45 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:44.159 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:44.418 Cannot find device "nvmf_tgt_br" 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:44.418 Cannot find device "nvmf_tgt_br2" 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:44.418 Cannot find device "nvmf_tgt_br" 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:32:44.418 Cannot find device "nvmf_tgt_br2" 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:44.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:44.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:44.418 17:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:44.418 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:44.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:32:44.676 00:32:44.676 --- 10.0.0.2 ping statistics --- 00:32:44.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.676 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:44.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:44.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:32:44.676 00:32:44.676 --- 10.0.0.3 ping statistics --- 00:32:44.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.676 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:44.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:44.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:32:44.676 00:32:44.676 --- 10.0.0.1 ping statistics --- 00:32:44.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.676 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=82254 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 82254 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 82254 ']' 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.676 17:10:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:44.676 17:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:44.676 [2024-07-22 17:10:46.235697] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:44.676 [2024-07-22 17:10:46.235844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.935 [2024-07-22 17:10:46.411045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.193 [2024-07-22 17:10:46.685133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.193 [2024-07-22 17:10:46.685225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.193 [2024-07-22 17:10:46.685242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.193 [2024-07-22 17:10:46.685287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.193 [2024-07-22 17:10:46.685303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.193 [2024-07-22 17:10:46.685541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.193 [2024-07-22 17:10:46.685588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.193 [2024-07-22 17:10:46.686475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.193 [2024-07-22 17:10:46.686515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:45.451 [2024-07-22 17:10:46.971209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:45.708 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:32:46.274 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:32:46.274 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:46.551 17:10:47 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:32:46.551 17:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.810 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:46.810 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:32:46.810 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:46.810 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:46.810 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:47.067 [2024-07-22 17:10:48.488008] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.067 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:47.326 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:47.326 17:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:47.583 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:47.583 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:47.841 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.099 [2024-07-22 17:10:49.564918] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.099 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.356 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:32:48.356 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:48.356 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:48.356 17:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:49.754 Initializing NVMe Controllers 00:32:49.754 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:49.754 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:49.754 Initialization complete. Launching workers. 
00:32:49.754 ======================================================== 00:32:49.754 Latency(us) 00:32:49.754 Device Information : IOPS MiB/s Average min max 00:32:49.754 PCIE (0000:00:10.0) NSID 1 from core 0: 22554.00 88.10 1418.82 348.05 7733.09 00:32:49.754 ======================================================== 00:32:49.754 Total : 22554.00 88.10 1418.82 348.05 7733.09 00:32:49.754 00:32:49.754 17:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:51.184 Initializing NVMe Controllers 00:32:51.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:51.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:51.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:51.184 Initialization complete. Launching workers. 00:32:51.184 ======================================================== 00:32:51.184 Latency(us) 00:32:51.184 Device Information : IOPS MiB/s Average min max 00:32:51.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3037.00 11.86 327.59 115.46 5338.63 00:32:51.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8119.11 6977.30 12073.06 00:32:51.184 ======================================================== 00:32:51.184 Total : 3161.00 12.35 633.24 115.46 12073.06 00:32:51.184 00:32:51.184 17:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.558 Initializing NVMe Controllers 00:32:52.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:52.558 Initialization complete. Launching workers. 00:32:52.558 ======================================================== 00:32:52.558 Latency(us) 00:32:52.558 Device Information : IOPS MiB/s Average min max 00:32:52.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7222.95 28.21 4430.73 655.52 8645.15 00:32:52.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3812.14 14.89 8406.06 5765.10 16795.53 00:32:52.558 ======================================================== 00:32:52.558 Total : 11035.08 43.11 5804.03 655.52 16795.53 00:32:52.558 00:32:52.816 17:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:32:52.816 17:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:56.099 Initializing NVMe Controllers 00:32:56.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.099 Controller IO queue size 128, less than required. 00:32:56.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.099 Controller IO queue size 128, less than required. 
00:32:56.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:56.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:56.099 Initialization complete. Launching workers. 00:32:56.099 ======================================================== 00:32:56.099 Latency(us) 00:32:56.099 Device Information : IOPS MiB/s Average min max 00:32:56.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1231.97 307.99 107172.28 55660.97 357979.61 00:32:56.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.49 145.37 239708.50 110501.77 640532.10 00:32:56.099 ======================================================== 00:32:56.099 Total : 1813.46 453.36 149670.11 55660.97 640532.10 00:32:56.099 00:32:56.099 17:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:56.099 Initializing NVMe Controllers 00:32:56.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.099 Controller IO queue size 128, less than required. 00:32:56.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.099 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:56.099 Controller IO queue size 128, less than required. 00:32:56.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.099 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:32:56.099 WARNING: Some requested NVMe devices were skipped 00:32:56.099 No valid NVMe controllers or AIO or URING devices found 00:32:56.099 17:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:59.383 Initializing NVMe Controllers 00:32:59.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:59.383 Controller IO queue size 128, less than required. 00:32:59.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:59.383 Controller IO queue size 128, less than required. 00:32:59.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:59.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:59.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:59.383 Initialization complete. Launching workers. 
00:32:59.383 00:32:59.383 ==================== 00:32:59.383 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:59.383 TCP transport: 00:32:59.383 polls: 4095 00:32:59.383 idle_polls: 1729 00:32:59.383 sock_completions: 2366 00:32:59.383 nvme_completions: 4767 00:32:59.383 submitted_requests: 7266 00:32:59.383 queued_requests: 1 00:32:59.383 00:32:59.383 ==================== 00:32:59.383 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:59.383 TCP transport: 00:32:59.383 polls: 4964 00:32:59.383 idle_polls: 2155 00:32:59.383 sock_completions: 2809 00:32:59.383 nvme_completions: 5087 00:32:59.383 submitted_requests: 7648 00:32:59.383 queued_requests: 1 00:32:59.383 ======================================================== 00:32:59.383 Latency(us) 00:32:59.383 Device Information : IOPS MiB/s Average min max 00:32:59.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.40 297.85 111949.64 45937.44 367834.27 00:32:59.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1271.40 317.85 103904.90 54856.76 531949.89 00:32:59.383 ======================================================== 00:32:59.383 Total : 2462.80 615.70 107796.62 45937.44 531949.89 00:32:59.383 00:32:59.383 17:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:59.383 17:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.640 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:59.640 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:32:59.640 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=91741069-2721-47a0-ab9d-3cf0ff567d3c 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 91741069-2721-47a0-ab9d-3cf0ff567d3c 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=91741069-2721-47a0-ab9d-3cf0ff567d3c 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:59.897 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:00.154 { 00:33:00.154 "uuid": "91741069-2721-47a0-ab9d-3cf0ff567d3c", 00:33:00.154 "name": "lvs_0", 00:33:00.154 "base_bdev": "Nvme0n1", 00:33:00.154 "total_data_clusters": 1278, 00:33:00.154 "free_clusters": 1278, 00:33:00.154 "block_size": 4096, 00:33:00.154 "cluster_size": 4194304 00:33:00.154 } 00:33:00.154 ]' 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="91741069-2721-47a0-ab9d-3cf0ff567d3c") .free_clusters' 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="91741069-2721-47a0-ab9d-3cf0ff567d3c") .cluster_size' 00:33:00.154 5112 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:33:00.154 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 91741069-2721-47a0-ab9d-3cf0ff567d3c lbd_0 5112 00:33:00.440 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1e518f0a-54a0-4190-920a-17020815911e 00:33:00.440 17:11:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1e518f0a-54a0-4190-920a-17020815911e lvs_n_0 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2e53fa8e-477e-4436-b37e-49ba5552cc2d 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2e53fa8e-477e-4436-b37e-49ba5552cc2d 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=2e53fa8e-477e-4436-b37e-49ba5552cc2d 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:33:00.698 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:00.955 { 00:33:00.955 "uuid": "91741069-2721-47a0-ab9d-3cf0ff567d3c", 00:33:00.955 "name": "lvs_0", 00:33:00.955 "base_bdev": "Nvme0n1", 00:33:00.955 "total_data_clusters": 1278, 00:33:00.955 "free_clusters": 0, 00:33:00.955 "block_size": 4096, 00:33:00.955 "cluster_size": 4194304 00:33:00.955 }, 00:33:00.955 { 00:33:00.955 "uuid": "2e53fa8e-477e-4436-b37e-49ba5552cc2d", 00:33:00.955 "name": "lvs_n_0", 00:33:00.955 "base_bdev": "1e518f0a-54a0-4190-920a-17020815911e", 00:33:00.955 "total_data_clusters": 1276, 00:33:00.955 "free_clusters": 1276, 00:33:00.955 "block_size": 4096, 00:33:00.955 "cluster_size": 4194304 00:33:00.955 } 00:33:00.955 ]' 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2e53fa8e-477e-4436-b37e-49ba5552cc2d") .free_clusters' 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2e53fa8e-477e-4436-b37e-49ba5552cc2d") .cluster_size' 00:33:00.955 5104 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:33:00.955 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2e53fa8e-477e-4436-b37e-49ba5552cc2d lbd_nest_0 5104 00:33:01.519 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f64d9432-e460-4ef5-a002-a783c2299dfc 00:33:01.519 17:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.519 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:33:01.519 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f64d9432-e460-4ef5-a002-a783c2299dfc 00:33:02.084 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.342 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:33:02.342 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:33:02.342 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:02.342 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:02.342 17:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:02.600 Initializing NVMe Controllers 00:33:02.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:02.600 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:33:02.600 WARNING: Some requested NVMe devices were skipped 00:33:02.600 No valid NVMe controllers or AIO or URING devices found 00:33:02.859 17:11:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:02.859 17:11:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:15.059 Initializing NVMe Controllers 00:33:15.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:15.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:15.059 Initialization complete. Launching workers. 
00:33:15.059 ======================================================== 00:33:15.059 Latency(us) 00:33:15.059 Device Information : IOPS MiB/s Average min max 00:33:15.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 744.78 93.10 1341.65 317.24 2358949.40 00:33:15.059 ======================================================== 00:33:15.059 Total : 744.78 93.10 1341.65 317.24 2358949.40 00:33:15.059 00:33:15.059 17:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:15.059 17:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:15.059 17:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:15.059 Initializing NVMe Controllers 00:33:15.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:15.059 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:33:15.059 WARNING: Some requested NVMe devices were skipped 00:33:15.059 No valid NVMe controllers or AIO or URING devices found 00:33:15.059 17:11:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:15.059 17:11:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.064 Initializing NVMe Controllers 00:33:25.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:25.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:25.065 Initialization complete. Launching workers. 
00:33:25.065 ======================================================== 00:33:25.065 Latency(us) 00:33:25.065 Device Information : IOPS MiB/s Average min max 00:33:25.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1331.68 166.46 24037.65 7744.96 67579.11 00:33:25.065 ======================================================== 00:33:25.065 Total : 1331.68 166.46 24037.65 7744.96 67579.11 00:33:25.065 00:33:25.065 17:11:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:25.065 17:11:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:25.065 17:11:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.065 Initializing NVMe Controllers 00:33:25.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:25.065 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:33:25.065 WARNING: Some requested NVMe devices were skipped 00:33:25.065 No valid NVMe controllers or AIO or URING devices found 00:33:25.065 17:11:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:25.065 17:11:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:35.080 Initializing NVMe Controllers 00:33:35.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:35.080 Controller IO queue size 128, less than required. 00:33:35.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:35.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:35.080 Initialization complete. Launching workers. 
00:33:35.080 ======================================================== 00:33:35.080 Latency(us) 00:33:35.080 Device Information : IOPS MiB/s Average min max 00:33:35.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3554.97 444.37 36004.14 15311.00 131639.58 00:33:35.080 ======================================================== 00:33:35.080 Total : 3554.97 444.37 36004.14 15311.00 131639.58 00:33:35.080 00:33:35.080 17:11:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.339 17:11:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f64d9432-e460-4ef5-a002-a783c2299dfc 00:33:35.665 17:11:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:35.924 17:11:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1e518f0a-54a0-4190-920a-17020815911e 00:33:36.186 17:11:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:36.446 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:36.446 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:36.446 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:36.446 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:36.705 rmmod nvme_tcp 00:33:36.705 rmmod nvme_fabrics 00:33:36.705 rmmod nvme_keyring 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 82254 ']' 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 82254 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 82254 ']' 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 82254 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82254 00:33:36.705 killing process with pid 82254 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82254' 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@967 -- # kill 82254 00:33:36.705 17:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 82254 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:39.991 ************************************ 00:33:39.991 END TEST nvmf_perf 00:33:39.991 ************************************ 00:33:39.991 00:33:39.991 real 0m55.465s 00:33:39.991 user 3m27.540s 00:33:39.991 sys 0m14.549s 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:39.991 17:11:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.992 ************************************ 00:33:39.992 START TEST nvmf_fio_host 00:33:39.992 ************************************ 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:39.992 * Looking for test storage... 
00:33:39.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:39.992 17:11:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:39.992 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:39.993 Cannot find device "nvmf_tgt_br" 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:39.993 Cannot find device "nvmf_tgt_br2" 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:39.993 
Cannot find device "nvmf_tgt_br" 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:39.993 Cannot find device "nvmf_tgt_br2" 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:39.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:39.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:39.993 17:11:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:39.993 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:40.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:40.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:33:40.252 00:33:40.252 --- 10.0.0.2 ping statistics --- 00:33:40.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.252 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:40.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:40.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:33:40.252 00:33:40.252 --- 10.0.0.3 ping statistics --- 00:33:40.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.252 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:40.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:40.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:33:40.252 00:33:40.252 --- 10.0.0.1 ping statistics --- 00:33:40.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.252 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
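The xtrace above (nvmf/common.sh@141 through @207) is the whole of nvmf_veth_init: one veth pair per interface, a dedicated network namespace for the target, and a bridge joining the host-side ends, verified by three pings. Condensed into a standalone sketch for reference — interface, namespace and address names are copied from the log, but the real logic lives in the harness's nvmf/common.sh, so treat this as an approximation rather than the script itself:

# Approximate reconstruction of nvmf_veth_init, as exercised in the log above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge the host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> first target IP
ping -c 1 10.0.0.3                                            # initiator -> second target IP
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator

With the bridge forwarding between the host-side veth ends, the initiator at 10.0.0.1 can reach both target addresses, which is exactly what the three ping blocks above confirm before the target application is started.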
00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=83112 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 83112 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 83112 ']' 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:40.252 17:11:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.252 [2024-07-22 17:11:41.792653] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:40.252 [2024-07-22 17:11:41.793028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.510 [2024-07-22 17:11:41.969889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.768 [2024-07-22 17:11:42.334133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.768 [2024-07-22 17:11:42.334503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.768 [2024-07-22 17:11:42.334641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.768 [2024-07-22 17:11:42.334708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.768 [2024-07-22 17:11:42.334747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
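At this point nvmf_tgt (pid 83112) has been launched inside the target namespace with shm id 0, tracepoint mask 0xFFFF and core mask 0xF, and waitforlisten blocks until it answers on /var/tmp/spdk.sock; the reactor and socket notices that follow are its startup banner. Once it is up, host/fio.sh provisions it over rpc.py and runs fio against it. The sequence below is condensed from the xtrace that follows — the '&' backgrounding and the RPC shell variable are shorthand added here, everything else is taken from the log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch the target inside the test namespace: shm id 0, tracepoint mask 0xFFFF,
# core mask 0xF; the harness then waits for /var/tmp/spdk.sock to appear.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Provision a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and one
# subsystem listening on the first target IP, plus a discovery listener.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# fio then drives the subsystem through the SPDK nvme fio plugin; libasan is
# preloaded because this is an ASan build, and the device is addressed by its
# transport ID rather than by a block device node.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The remainder of the nvmf_fio_host log repeats this pattern: the same fio invocation against mock_sgl_config.fio, then against subsystems cnode2 and cnode3 backed by lvol bdevs carved out of the attached NVMe device (lvs_0 and the nested lvs_n_0), before everything is torn down and nvmf_failover starts.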
00:33:40.768 [2024-07-22 17:11:42.334998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.768 [2024-07-22 17:11:42.335178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.768 [2024-07-22 17:11:42.335778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.768 [2024-07-22 17:11:42.335802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:41.026 [2024-07-22 17:11:42.621466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:33:41.284 17:11:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.284 17:11:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:33:41.284 17:11:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:41.542 [2024-07-22 17:11:43.022799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.542 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:41.542 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:41.542 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.542 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:41.800 Malloc1 00:33:42.059 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:42.317 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:42.317 17:11:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.573 [2024-07-22 17:11:44.098049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.573 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:42.830 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:43.088 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:43.088 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:43.088 17:11:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:43.088 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:43.088 fio-3.35 00:33:43.088 Starting 1 thread 00:33:45.617 00:33:45.617 test: (groupid=0, jobs=1): err= 0: pid=83186: Mon Jul 22 17:11:47 2024 00:33:45.617 read: IOPS=7361, BW=28.8MiB/s (30.2MB/s)(57.7MiB/2008msec) 00:33:45.617 slat (usec): min=2, max=227, avg= 2.99, stdev= 2.60 00:33:45.617 clat (usec): min=1976, max=16123, avg=9057.27, stdev=865.06 00:33:45.617 lat (usec): min=2017, max=16126, avg=9060.26, stdev=864.79 00:33:45.617 clat percentiles (usec): 00:33:45.617 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8455], 00:33:45.617 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:33:45.617 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10683], 00:33:45.617 | 99.00th=[11863], 99.50th=[12256], 99.90th=[15008], 99.95th=[15270], 00:33:45.617 | 99.99th=[16057] 00:33:45.617 bw ( KiB/s): min=28576, max=30672, per=100.00%, avg=29454.00, stdev=901.71, samples=4 00:33:45.617 iops : min= 7144, max= 7668, avg=7363.50, stdev=225.43, samples=4 00:33:45.617 write: IOPS=7332, BW=28.6MiB/s (30.0MB/s)(57.5MiB/2008msec); 0 zone resets 00:33:45.617 slat (usec): min=2, max=169, avg= 3.22, stdev= 1.99 00:33:45.617 clat (usec): min=1836, max=16040, avg=8284.09, stdev=803.38 00:33:45.617 lat (usec): min=1847, max=16043, avg=8287.30, stdev=803.23 00:33:45.617 clat percentiles (usec): 00:33:45.617 | 1.00th=[ 6980], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7767], 00:33:45.617 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:33:45.617 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9765], 00:33:45.617 | 99.00th=[10945], 99.50th=[11338], 99.90th=[13829], 99.95th=[15139], 00:33:45.617 | 99.99th=[15926] 00:33:45.617 bw ( KiB/s): min=28368, max=30072, per=99.91%, avg=29302.00, stdev=848.69, samples=4 00:33:45.617 iops : min= 7092, max= 7518, avg=7325.50, stdev=212.17, samples=4 
00:33:45.617 lat (msec) : 2=0.01%, 4=0.13%, 10=92.70%, 20=7.16% 00:33:45.617 cpu : usr=70.70%, sys=22.17%, ctx=7, majf=0, minf=1540 00:33:45.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:45.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:45.617 issued rwts: total=14782,14723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:45.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:45.617 00:33:45.617 Run status group 0 (all jobs): 00:33:45.617 READ: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=57.7MiB (60.5MB), run=2008-2008msec 00:33:45.617 WRITE: bw=28.6MiB/s (30.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=57.5MiB (60.3MB), run=2008-2008msec 00:33:45.617 ----------------------------------------------------- 00:33:45.617 Suppressions used: 00:33:45.617 count bytes template 00:33:45.617 1 57 /usr/src/fio/parse.c 00:33:45.617 1 8 libtcmalloc_minimal.so 00:33:45.617 ----------------------------------------------------- 00:33:45.617 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:45.875 17:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:45.875 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:45.875 fio-3.35 00:33:45.875 Starting 1 thread 00:33:48.404 00:33:48.404 test: (groupid=0, jobs=1): err= 0: pid=83223: Mon Jul 22 17:11:49 2024 00:33:48.404 read: IOPS=5571, BW=87.1MiB/s (91.3MB/s)(175MiB/2009msec) 00:33:48.404 slat (usec): min=3, max=128, avg= 4.98, stdev= 3.14 00:33:48.404 clat (usec): min=3304, max=31322, avg=12904.05, stdev=4309.01 00:33:48.404 lat (usec): min=3308, max=31327, avg=12909.03, stdev=4309.76 00:33:48.404 clat percentiles (usec): 00:33:48.404 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 8029], 20.00th=[ 9503], 00:33:48.404 | 30.00th=[10552], 40.00th=[11207], 50.00th=[12125], 60.00th=[13042], 00:33:48.404 | 70.00th=[14484], 80.00th=[16319], 90.00th=[18744], 95.00th=[20841], 00:33:48.404 | 99.00th=[27395], 99.50th=[29754], 99.90th=[31065], 99.95th=[31327], 00:33:48.404 | 99.99th=[31327] 00:33:48.404 bw ( KiB/s): min=37472, max=55776, per=52.86%, avg=47120.00, stdev=7801.94, samples=4 00:33:48.404 iops : min= 2342, max= 3486, avg=2945.00, stdev=487.62, samples=4 00:33:48.404 write: IOPS=3247, BW=50.7MiB/s (53.2MB/s)(96.1MiB/1894msec); 0 zone resets 00:33:48.404 slat (usec): min=35, max=550, avg=43.69, stdev=11.56 00:33:48.404 clat (usec): min=8324, max=35517, avg=17470.70, stdev=4679.04 00:33:48.404 lat (usec): min=8364, max=35563, avg=17514.39, stdev=4682.06 00:33:48.404 clat percentiles (usec): 00:33:48.404 | 1.00th=[10159], 5.00th=[11600], 10.00th=[12256], 20.00th=[13435], 00:33:48.404 | 30.00th=[14484], 40.00th=[15664], 50.00th=[16712], 60.00th=[17695], 00:33:48.404 | 70.00th=[19006], 80.00th=[20579], 90.00th=[24249], 95.00th=[27132], 00:33:48.404 | 99.00th=[31327], 99.50th=[33162], 99.90th=[34866], 99.95th=[35390], 00:33:48.404 | 99.99th=[35390] 00:33:48.405 bw ( KiB/s): min=37888, max=57056, per=93.65%, avg=48656.00, stdev=8374.81, samples=4 00:33:48.405 iops : min= 2368, max= 3566, avg=3041.00, stdev=523.43, samples=4 00:33:48.405 lat (msec) : 4=0.06%, 10=16.36%, 20=71.01%, 50=12.56% 00:33:48.405 cpu : usr=81.23%, sys=15.13%, ctx=15, majf=0, minf=1988 00:33:48.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:48.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:48.405 issued rwts: total=11193,6150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:48.405 00:33:48.405 Run status group 0 (all jobs): 00:33:48.405 READ: bw=87.1MiB/s (91.3MB/s), 87.1MiB/s-87.1MiB/s (91.3MB/s-91.3MB/s), io=175MiB (183MB), run=2009-2009msec 00:33:48.405 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=96.1MiB (101MB), run=1894-1894msec 00:33:48.662 ----------------------------------------------------- 00:33:48.662 Suppressions used: 00:33:48.662 count bytes template 00:33:48.663 1 57 /usr/src/fio/parse.c 00:33:48.663 874 83904 /usr/src/fio/iolog.c 00:33:48.663 1 8 libtcmalloc_minimal.so 00:33:48.663 ----------------------------------------------------- 00:33:48.663 00:33:48.663 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:48.920 17:11:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:33:48.920 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:33:49.179 Nvme0n1 00:33:49.179 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=29d86117-6479-4340-8746-6676f7c54f42 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 29d86117-6479-4340-8746-6676f7c54f42 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=29d86117-6479-4340-8746-6676f7c54f42 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:49.437 17:11:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:50.002 { 00:33:50.002 "uuid": "29d86117-6479-4340-8746-6676f7c54f42", 00:33:50.002 "name": "lvs_0", 00:33:50.002 "base_bdev": "Nvme0n1", 00:33:50.002 "total_data_clusters": 4, 00:33:50.002 "free_clusters": 4, 00:33:50.002 "block_size": 4096, 00:33:50.002 "cluster_size": 1073741824 00:33:50.002 } 00:33:50.002 ]' 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="29d86117-6479-4340-8746-6676f7c54f42") .free_clusters' 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="29d86117-6479-4340-8746-6676f7c54f42") .cluster_size' 00:33:50.002 4096 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:33:50.002 17:11:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:33:50.002 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:33:50.320 4fa1794c-d09d-419e-80ca-cd5eaf2ec02e 00:33:50.320 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:50.580 17:11:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:50.837 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:51.094 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:51.095 17:11:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:51.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:51.095 fio-3.35 
00:33:51.095 Starting 1 thread 00:33:53.621 00:33:53.621 test: (groupid=0, jobs=1): err= 0: pid=83331: Mon Jul 22 17:11:55 2024 00:33:53.621 read: IOPS=5316, BW=20.8MiB/s (21.8MB/s)(41.7MiB/2010msec) 00:33:53.621 slat (usec): min=2, max=192, avg= 2.87, stdev= 2.78 00:33:53.621 clat (usec): min=3629, max=23299, avg=12574.51, stdev=1403.96 00:33:53.621 lat (usec): min=3634, max=23301, avg=12577.37, stdev=1403.86 00:33:53.621 clat percentiles (usec): 00:33:53.621 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:33:53.621 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:33:53.621 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14091], 95.00th=[14877], 00:33:53.621 | 99.00th=[16909], 99.50th=[19268], 99.90th=[21890], 99.95th=[22152], 00:33:53.621 | 99.99th=[23200] 00:33:53.621 bw ( KiB/s): min=20504, max=21912, per=99.79%, avg=21220.00, stdev=595.97, samples=4 00:33:53.621 iops : min= 5126, max= 5478, avg=5305.00, stdev=148.99, samples=4 00:33:53.621 write: IOPS=5299, BW=20.7MiB/s (21.7MB/s)(41.6MiB/2010msec); 0 zone resets 00:33:53.621 slat (usec): min=2, max=129, avg= 3.02, stdev= 1.72 00:33:53.621 clat (usec): min=2560, max=20972, avg=11395.91, stdev=1330.19 00:33:53.621 lat (usec): min=2569, max=20975, avg=11398.93, stdev=1330.17 00:33:53.621 clat percentiles (usec): 00:33:53.621 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:33:53.621 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:33:53.621 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12780], 95.00th=[13566], 00:33:53.621 | 99.00th=[15664], 99.50th=[18482], 99.90th=[20317], 99.95th=[20579], 00:33:53.621 | 99.99th=[20841] 00:33:53.621 bw ( KiB/s): min=20600, max=21824, per=100.00%, avg=21202.00, stdev=525.83, samples=4 00:33:53.621 iops : min= 5150, max= 5456, avg=5300.50, stdev=131.46, samples=4 00:33:53.621 lat (msec) : 4=0.05%, 10=4.45%, 20=95.22%, 50=0.28% 00:33:53.621 cpu : usr=71.93%, sys=23.05%, ctx=10, majf=0, minf=1539 00:33:53.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:33:53.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:53.621 issued rwts: total=10686,10652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:53.621 00:33:53.621 Run status group 0 (all jobs): 00:33:53.621 READ: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=41.7MiB (43.8MB), run=2010-2010msec 00:33:53.621 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.6MiB (43.6MB), run=2010-2010msec 00:33:53.880 ----------------------------------------------------- 00:33:53.880 Suppressions used: 00:33:53.880 count bytes template 00:33:53.880 1 58 /usr/src/fio/parse.c 00:33:53.880 1 8 libtcmalloc_minimal.so 00:33:53.880 ----------------------------------------------------- 00:33:53.880 00:33:53.880 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:54.137 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8fc174a5-d3c0-4690-b1ed-2b063bc30a05 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- host/fio.sh@65 -- # get_lvs_free_mb 8fc174a5-d3c0-4690-b1ed-2b063bc30a05 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8fc174a5-d3c0-4690-b1ed-2b063bc30a05 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:54.395 17:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:54.654 { 00:33:54.654 "uuid": "29d86117-6479-4340-8746-6676f7c54f42", 00:33:54.654 "name": "lvs_0", 00:33:54.654 "base_bdev": "Nvme0n1", 00:33:54.654 "total_data_clusters": 4, 00:33:54.654 "free_clusters": 0, 00:33:54.654 "block_size": 4096, 00:33:54.654 "cluster_size": 1073741824 00:33:54.654 }, 00:33:54.654 { 00:33:54.654 "uuid": "8fc174a5-d3c0-4690-b1ed-2b063bc30a05", 00:33:54.654 "name": "lvs_n_0", 00:33:54.654 "base_bdev": "4fa1794c-d09d-419e-80ca-cd5eaf2ec02e", 00:33:54.654 "total_data_clusters": 1022, 00:33:54.654 "free_clusters": 1022, 00:33:54.654 "block_size": 4096, 00:33:54.654 "cluster_size": 4194304 00:33:54.654 } 00:33:54.654 ]' 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8fc174a5-d3c0-4690-b1ed-2b063bc30a05") .free_clusters' 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8fc174a5-d3c0-4690-b1ed-2b063bc30a05") .cluster_size' 00:33:54.654 4088 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:33:54.654 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:33:54.912 6dc59476-a685-43d2-aaa4-05228784ba09 00:33:55.169 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:55.427 17:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:55.685 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:55.943 17:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:56.203 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:56.203 fio-3.35 00:33:56.203 Starting 1 thread 00:33:58.739 00:33:58.739 test: (groupid=0, jobs=1): err= 0: pid=83407: Mon Jul 22 17:12:00 2024 00:33:58.739 read: IOPS=4954, BW=19.4MiB/s (20.3MB/s)(38.9MiB/2011msec) 00:33:58.739 slat (nsec): min=1911, max=239038, avg=3448.12, stdev=3620.99 00:33:58.739 clat (usec): min=3493, max=23213, avg=13468.29, stdev=1267.04 00:33:58.739 lat (usec): min=3500, max=23216, avg=13471.74, stdev=1266.81 00:33:58.739 clat percentiles (usec): 00:33:58.739 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[12125], 20.00th=[12649], 00:33:58.739 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:33:58.739 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:33:58.739 | 99.00th=[16057], 99.50th=[16581], 99.90th=[21627], 99.95th=[21890], 00:33:58.739 | 99.99th=[23200] 00:33:58.739 bw ( KiB/s): min=18856, max=20344, per=99.86%, avg=19790.00, stdev=660.97, samples=4 00:33:58.739 iops : min= 4714, max= 5086, avg=4947.50, stdev=165.24, samples=4 00:33:58.739 write: IOPS=4948, BW=19.3MiB/s (20.3MB/s)(38.9MiB/2011msec); 0 zone resets 00:33:58.739 slat (usec): min=2, max=202, avg= 3.64, stdev= 2.98 00:33:58.739 clat (usec): min=2314, max=23344, avg=12222.28, stdev=1260.09 00:33:58.739 lat (usec): min=2325, max=23346, avg=12225.93, stdev=1260.14 00:33:58.739 clat percentiles (usec): 00:33:58.739 | 1.00th=[ 7635], 
5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:33:58.739 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:33:58.739 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:33:58.739 | 99.00th=[14877], 99.50th=[15401], 99.90th=[20317], 99.95th=[21890], 00:33:58.739 | 99.99th=[23462] 00:33:58.739 bw ( KiB/s): min=19632, max=19848, per=99.85%, avg=19764.00, stdev=92.49, samples=4 00:33:58.739 iops : min= 4908, max= 4962, avg=4941.00, stdev=23.12, samples=4 00:33:58.739 lat (msec) : 4=0.05%, 10=1.58%, 20=98.20%, 50=0.17% 00:33:58.739 cpu : usr=73.78%, sys=21.09%, ctx=165, majf=0, minf=1539 00:33:58.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:58.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.739 issued rwts: total=9963,9951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.739 00:33:58.739 Run status group 0 (all jobs): 00:33:58.739 READ: bw=19.4MiB/s (20.3MB/s), 19.4MiB/s-19.4MiB/s (20.3MB/s-20.3MB/s), io=38.9MiB (40.8MB), run=2011-2011msec 00:33:58.739 WRITE: bw=19.3MiB/s (20.3MB/s), 19.3MiB/s-19.3MiB/s (20.3MB/s-20.3MB/s), io=38.9MiB (40.8MB), run=2011-2011msec 00:33:58.739 ----------------------------------------------------- 00:33:58.739 Suppressions used: 00:33:58.739 count bytes template 00:33:58.739 1 58 /usr/src/fio/parse.c 00:33:58.739 1 8 libtcmalloc_minimal.so 00:33:58.739 ----------------------------------------------------- 00:33:58.739 00:33:58.739 17:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:58.998 17:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:58.998 17:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:59.256 17:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:59.514 17:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:59.772 17:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:00.030 17:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:34:00.598 rmmod nvme_tcp 00:34:00.598 rmmod nvme_fabrics 00:34:00.598 rmmod nvme_keyring 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 83112 ']' 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 83112 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 83112 ']' 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 83112 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83112 00:34:00.598 killing process with pid 83112 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83112' 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 83112 00:34:00.598 17:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 83112 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.501 17:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:02.501 ************************************ 00:34:02.501 END TEST nvmf_fio_host 00:34:02.501 ************************************ 00:34:02.501 00:34:02.501 real 0m22.878s 00:34:02.501 user 1m36.815s 00:34:02.501 sys 0m5.478s 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:34:02.501 17:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.501 ************************************ 00:34:02.501 START TEST nvmf_failover 00:34:02.501 ************************************ 00:34:02.501 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:02.760 * Looking for test storage... 00:34:02.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.760 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:02.761 17:12:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:02.761 17:12:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:02.761 Cannot find device "nvmf_tgt_br" 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:02.761 Cannot find device "nvmf_tgt_br2" 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:02.761 Cannot find device "nvmf_tgt_br" 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:02.761 Cannot find device "nvmf_tgt_br2" 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:02.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:02.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:02.761 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:03.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:03.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:34:03.020 00:34:03.020 --- 10.0.0.2 ping statistics --- 00:34:03.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.020 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:03.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:03.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.225 ms 00:34:03.020 00:34:03.020 --- 10.0.0.3 ping statistics --- 00:34:03.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.020 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:03.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:03.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:34:03.020 00:34:03.020 --- 10.0.0.1 ping statistics --- 00:34:03.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.020 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=83663 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 83663 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:03.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 83663 ']' 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:03.020 17:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:03.289 [2024-07-22 17:12:04.713230] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:03.289 [2024-07-22 17:12:04.713429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:03.289 [2024-07-22 17:12:04.900514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:03.855 [2024-07-22 17:12:05.172457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
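[annotation] The nvmf_veth_init / nvmfappstart trace above boils down to the following sequence. This is a condensed sketch reconstructed only from the commands logged above; the cleanup of leftover interfaces that nvmf/common.sh attempts first (the "Cannot find device" lines) and its error handling are omitted, and the exact flags may differ from the helper's current source.

    # Create a network namespace for the target and three veth pairs:
    # nvmf_init_if stays on the host (initiator side), while
    # nvmf_tgt_if / nvmf_tgt_if2 are moved into the namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Address plan from the trace: 10.0.0.1 = initiator,
    # 10.0.0.2 / 10.0.0.3 = target-side listener addresses.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic and bridge forwarding, then sanity-check with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace (nvmfappstart -m 0xE above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

With that topology in place, the connectivity checks logged above (ping 10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) all succeed before the target is started.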
00:34:03.855 [2024-07-22 17:12:05.172533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:03.855 [2024-07-22 17:12:05.172550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:03.855 [2024-07-22 17:12:05.172566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:03.855 [2024-07-22 17:12:05.172579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:03.855 [2024-07-22 17:12:05.172744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:03.855 [2024-07-22 17:12:05.173593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.855 [2024-07-22 17:12:05.173614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:03.855 [2024-07-22 17:12:05.471974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.113 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:04.372 [2024-07-22 17:12:05.956530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.630 17:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:04.888 Malloc0 00:34:04.888 17:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:05.145 17:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.403 17:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:05.660 [2024-07-22 17:12:07.048957] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.660 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:05.917 [2024-07-22 17:12:07.305332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.917 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:06.175 [2024-07-22 17:12:07.561732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4422 *** 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=83721 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 83721 /var/tmp/bdevperf.sock 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 83721 ']' 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:06.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:06.175 17:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:07.110 17:12:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:07.110 17:12:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:07.111 17:12:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:07.369 NVMe0n1 00:34:07.369 17:12:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:07.937 00:34:07.937 17:12:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=83746 00:34:07.937 17:12:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:07.937 17:12:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:08.871 17:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:09.129 [2024-07-22 17:12:10.565404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:09.129 [2024-07-22 17:12:10.567684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:09.129 [2024-07-22 17:12:10.567707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:09.129 17:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:12.464 17:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.464 00:34:12.464 17:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:12.722 17:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:16.056 17:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.056 [2024-07-22 17:12:17.523078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.056 17:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:16.990 17:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:17.248 17:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 83746 00:34:23.804 0 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 83721 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 83721 ']' 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 83721 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83721 00:34:23.804 killing process with pid 83721 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83721' 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 83721 00:34:23.804 17:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 83721 00:34:24.746 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:24.746 [2024-07-22 17:12:07.712852] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:24.746 [2024-07-22 17:12:07.713104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83721 ] 00:34:24.746 [2024-07-22 17:12:07.885158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.746 [2024-07-22 17:12:08.144917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.746 [2024-07-22 17:12:08.417774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:34:24.746 Running I/O for 15 seconds... 
00:34:24.746 [2024-07-22 17:12:10.567175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.746 [2024-07-22 17:12:10.567287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.746 [2024-07-22 17:12:10.567317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.746 [2024-07-22 17:12:10.567346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.746 [2024-07-22 17:12:10.567365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.746 [2024-07-22 17:12:10.567390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.746 [2024-07-22 17:12:10.567425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.746 [2024-07-22 17:12:10.567457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.746 [2024-07-22 17:12:10.567483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:34:24.746 [2024-07-22 17:12:10.567785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.746 [2024-07-22 17:12:10.567813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.746 [2024-07-22 17:12:10.567880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.746 [2024-07-22 17:12:10.567901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.746 [2024-07-22 17:12:10.567926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.567946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.567971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.567991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.568752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.568797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.568842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.568890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.568939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.568964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.568995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.747 [2024-07-22 17:12:10.569519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 
17:12:10.569543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.569562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.569605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.569651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.569696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.747 [2024-07-22 17:12:10.569740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.747 [2024-07-22 17:12:10.569773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.569828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.569855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.569874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.569901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.569920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.569953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.569972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.569997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.570292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77744 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.570957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.570981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.748 [2024-07-22 17:12:10.571368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 
17:12:10.571412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.571455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.748 [2024-07-22 17:12:10.571502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.748 [2024-07-22 17:12:10.571526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.571545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.571590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.571633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.571683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.571726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.571768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.571811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.571878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.571932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.571960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.571984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.572596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.749 [2024-07-22 17:12:10.572967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.572991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.573011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.573035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.573054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.749 [2024-07-22 17:12:10.573080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.749 [2024-07-22 17:12:10.573099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 
17:12:10.573417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:10.573804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.573824] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:34:24.750 [2024-07-22 17:12:10.573851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.750 [2024-07-22 17:12:10.573866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.750 [2024-07-22 17:12:10.573883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:34:24.750 [2024-07-22 17:12:10.573903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:10.574254] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:34:24.750 [2024-07-22 17:12:10.574290] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:24.750 [2024-07-22 17:12:10.574312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.750 [2024-07-22 17:12:10.577965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.750 [2024-07-22 17:12:10.578036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:34:24.750 [2024-07-22 17:12:10.615701] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:24.750 [2024-07-22 17:12:14.228423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 
17:12:14.228745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.228965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.228997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.229012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.229030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.229057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.229074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.229090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.229108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.750 [2024-07-22 17:12:14.229124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.750 [2024-07-22 17:12:14.229142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.751 [2024-07-22 17:12:14.229681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.229977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.229995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.230028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.230061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.230095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.230128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 17:12:14.230161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.751 [2024-07-22 
17:12:14.230195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.751 [2024-07-22 17:12:14.230211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.230531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.230972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.230989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.231005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.231038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.752 [2024-07-22 17:12:14.231072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.752 [2024-07-22 17:12:14.231506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.752 [2024-07-22 17:12:14.231522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.231555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.231588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 
[2024-07-22 17:12:14.231621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.231655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.753 [2024-07-22 17:12:14.231959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.231983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.753 [2024-07-22 17:12:14.232245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(5) to be set 00:34:24.753 [2024-07-22 17:12:14.232299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60376 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60896 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:24.753 [2024-07-22 17:12:14.232428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60904 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60912 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60920 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60928 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60936 len:8 PRP1 0x0 PRP2 0x0 00:34:24.753 [2024-07-22 17:12:14.232727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.753 [2024-07-22 17:12:14.232744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.753 [2024-07-22 17:12:14.232757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.753 [2024-07-22 17:12:14.232770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60944 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.232787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.232804] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.232816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.232830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60952 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.232846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.232863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.232876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.232890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60960 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.232906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.232923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.232936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.232950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60968 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.232966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.232995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60976 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60984 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60992 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61000 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61008 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61016 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61024 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61032 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61040 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 
17:12:14.233522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61048 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61056 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61064 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61072 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.233738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.754 [2024-07-22 17:12:14.233749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.754 [2024-07-22 17:12:14.233762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61080 len:8 PRP1 0x0 PRP2 0x0 00:34:24.754 [2024-07-22 17:12:14.233777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.234099] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 
00:34:24.754 [2024-07-22 17:12:14.234121] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:24.754 [2024-07-22 17:12:14.234190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:14.234210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.234228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:14.234254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.234288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:14.234306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.234325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:14.234342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:14.234360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.754 [2024-07-22 17:12:14.234421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:34:24.754 [2024-07-22 17:12:14.237853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.754 [2024-07-22 17:12:14.274560] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:24.754 [2024-07-22 17:12:18.761958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:18.762042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:18.762067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:18.762131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:18.762152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:18.762171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.754 [2024-07-22 17:12:18.762192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.754 [2024-07-22 17:12:18.762211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.762230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:34:24.755 [2024-07-22 17:12:18.763238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.763976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.763998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.755 [2024-07-22 17:12:18.764017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764059] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.764960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.764982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.765001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.765022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.765049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.765077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.765097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.755 [2024-07-22 17:12:18.765119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.755 [2024-07-22 17:12:18.765138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 
[2024-07-22 17:12:18.765463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.765530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765898] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.765959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.765980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.766000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.766047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.766088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.766148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.766190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.756 [2024-07-22 17:12:18.766231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766777] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.756 [2024-07-22 17:12:18.766838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.756 [2024-07-22 17:12:18.766860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.766879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.766901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.766921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.766943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.766962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.766984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.757 [2024-07-22 17:12:18.767600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.767966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.767985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:24.757 [2024-07-22 17:12:18.768074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 
17:12:18.768536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.757 [2024-07-22 17:12:18.768557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.757 [2024-07-22 17:12:18.768576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.768598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.758 [2024-07-22 17:12:18.768617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.768637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:34:24.758 [2024-07-22 17:12:18.768662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.768677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.768694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108824 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.768715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.768736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.768750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.768766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109216 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.768785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.768811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.768826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.768842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109224 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.768875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.768895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.768909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.768924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109232 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.768944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.768973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.768988] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.769005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109240 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.769024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.769043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.769057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.769072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109248 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.769091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.769110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.769124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.769140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109256 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.769177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.769191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.769208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109264 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.769227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.769257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.758 [2024-07-22 17:12:18.769273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.758 [2024-07-22 17:12:18.769288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109272 len:8 PRP1 0x0 PRP2 0x0 00:34:24.758 [2024-07-22 17:12:18.769307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.758 [2024-07-22 17:12:18.769655] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 00:34:24.758 [2024-07-22 17:12:18.769680] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:24.758 [2024-07-22 17:12:18.769711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:24.758 [2024-07-22 17:12:18.773569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.758 [2024-07-22 17:12:18.773643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:34:24.758 [2024-07-22 17:12:18.806584] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:24.758
00:34:24.758 Latency(us)
00:34:24.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:24.758 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:24.758 Verification LBA range: start 0x0 length 0x4000
00:34:24.758 NVMe0n1 : 15.01 8099.52 31.64 222.57 0.00 15348.08 573.44 24092.28
00:34:24.758 ===================================================================================================================
00:34:24.758 Total : 8099.52 31.64 222.57 0.00 15348.08 573.44 24092.28
00:34:24.758 Received shutdown signal, test time was about 15.000000 seconds
00:34:24.758
00:34:24.758 Latency(us)
00:34:24.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:24.758 ===================================================================================================================
00:34:24.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=83928
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 83928 /var/tmp/bdevperf.sock
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 83928 ']'
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:24.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:24.758 17:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:25.701 17:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:25.701 17:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:25.701 17:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:25.960 [2024-07-22 17:12:27.412034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:25.960 17:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:26.218 [2024-07-22 17:12:27.716397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:26.218 17:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:26.476 NVMe0n1 00:34:26.476 17:12:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:27.044 00:34:27.044 17:12:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:27.303 00:34:27.303 17:12:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:27.303 17:12:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:27.572 17:12:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:27.830 17:12:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:31.117 17:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:31.117 17:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:31.117 17:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=84006 00:34:31.117 17:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:31.117 17:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 84006 00:34:32.490 0 00:34:32.490 17:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:32.490 [2024-07-22 17:12:26.146434] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:32.490 [2024-07-22 17:12:26.147428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83928 ] 00:34:32.490 [2024-07-22 17:12:26.325498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.490 [2024-07-22 17:12:26.669995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.490 [2024-07-22 17:12:26.946371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:34:32.490 [2024-07-22 17:12:29.257864] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:32.491 [2024-07-22 17:12:29.258033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.491 [2024-07-22 17:12:29.258068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.491 [2024-07-22 17:12:29.258094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.491 [2024-07-22 17:12:29.258115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.491 [2024-07-22 17:12:29.258135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.491 [2024-07-22 17:12:29.258156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.491 [2024-07-22 17:12:29.258175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.491 [2024-07-22 17:12:29.258200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.491 [2024-07-22 17:12:29.258219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.491 [2024-07-22 17:12:29.258303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.491 [2024-07-22 17:12:29.258343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:34:32.491 [2024-07-22 17:12:29.262993] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:32.491 Running I/O for 1 seconds... 
00:34:32.491 00:34:32.491 Latency(us) 00:34:32.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.491 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:32.491 Verification LBA range: start 0x0 length 0x4000 00:34:32.491 NVMe0n1 : 1.01 8002.32 31.26 0.00 0.00 15888.66 2044.10 15978.30 00:34:32.491 =================================================================================================================== 00:34:32.491 Total : 8002.32 31.26 0.00 0.00 15888.66 2044.10 15978.30 00:34:32.491 17:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:32.491 17:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:32.491 17:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:32.749 17:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:32.749 17:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:33.007 17:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:33.265 17:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 83928 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 83928 ']' 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 83928 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83928 00:34:36.589 killing process with pid 83928 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83928' 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 83928 00:34:36.589 17:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 83928 00:34:37.985 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:37.985 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:38.244 17:12:39 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:38.244 rmmod nvme_tcp 00:34:38.244 rmmod nvme_fabrics 00:34:38.244 rmmod nvme_keyring 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 83663 ']' 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 83663 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 83663 ']' 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 83663 00:34:38.244 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83663 00:34:38.503 killing process with pid 83663 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83663' 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 83663 00:34:38.503 17:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 83663 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.408 
17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:40.408 00:34:40.408 real 0m37.557s 00:34:40.408 user 2m21.871s 00:34:40.408 sys 0m6.877s 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:40.408 ************************************ 00:34:40.408 END TEST nvmf_failover 00:34:40.408 ************************************ 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.408 ************************************ 00:34:40.408 START TEST nvmf_host_discovery 00:34:40.408 ************************************ 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:40.408 * Looking for test storage... 00:34:40.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:40.408 17:12:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 
00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.408 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:40.409 Cannot find device "nvmf_tgt_br" 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:40.409 Cannot find device "nvmf_tgt_br2" 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:40.409 Cannot find device "nvmf_tgt_br" 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:40.409 Cannot find device "nvmf_tgt_br2" 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:40.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:40.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip 
link add nvmf_init_if type veth peer name nvmf_init_br 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:40.409 17:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:40.409 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:40.667 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:40.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:40.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:34:40.668 00:34:40.668 --- 10.0.0.2 ping statistics --- 00:34:40.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.668 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:40.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:40.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:34:40.668 00:34:40.668 --- 10.0.0.3 ping statistics --- 00:34:40.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.668 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:40.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:40.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:34:40.668 00:34:40.668 --- 10.0.0.1 ping statistics --- 00:34:40.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.668 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=84298 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 84298 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 84298 ']' 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
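The nvmf_veth_init trace above builds the virtual test network: a dedicated network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, iptables openings for the NVMe/TCP port, and ping checks across 10.0.0.1-10.0.0.3. Condensed into a sketch using the same names and addresses shown in the trace (the second target pair, nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3, follows the identical pattern and is omitted here):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # host reaches the target namespace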
00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:40.668 17:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.927 [2024-07-22 17:12:42.378622] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:40.927 [2024-07-22 17:12:42.378789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.185 [2024-07-22 17:12:42.568339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.444 [2024-07-22 17:12:42.898859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.444 [2024-07-22 17:12:42.898917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.444 [2024-07-22 17:12:42.898931] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.444 [2024-07-22 17:12:42.898945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.444 [2024-07-22 17:12:42.898956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.444 [2024-07-22 17:12:42.899006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.703 [2024-07-22 17:12:43.159860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 [2024-07-22 17:12:43.378978] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 [2024-07-22 17:12:43.387131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 null0 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 null1 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=84330 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 84330 /tmp/host.sock 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 84330 ']' 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:41.962 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:41.962 17:12:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.962 [2024-07-22 17:12:43.544880] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:41.962 [2024-07-22 17:12:43.545057] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84330 ] 00:34:42.221 [2024-07-22 17:12:43.729902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.480 [2024-07-22 17:12:43.978008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.739 [2024-07-22 17:12:44.249441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:34:42.999 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.000 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.260 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.520 [2024-07-22 17:12:44.879585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:43.520 17:12:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.520 17:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:34:43.520 17:12:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:44.101 [2024-07-22 17:12:45.503564] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:44.101 [2024-07-22 17:12:45.503617] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:44.101 [2024-07-22 17:12:45.503663] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:44.101 
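The discovery records around this point show the host-side nvmf_tgt (listening on /tmp/host.sock) attaching the discovery controller at 10.0.0.2:8009 and then the advertised subsystem nqn.2016-06.io.spdk:cnode0 as controller nvme0, while the test polls until the controller and its nvme0n1 bdev appear. A condensed sketch of that flow, with rpc.py standing in for the harness's rpc_cmd wrapper; the socket path, address, and host NQN are taken from this trace:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
for _ in $(seq 1 10); do                                          # mirrors waitforcondition's max=10 retries
    names=$($RPC -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
    [[ $names == nvme0 ]] && break
    sleep 1
done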
[2024-07-22 17:12:45.509647] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:44.101 [2024-07-22 17:12:45.575880] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:44.101 [2024-07-22 17:12:45.575933] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.668 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 [2024-07-22 17:12:46.438153] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:44.927 [2024-07-22 17:12:46.438810] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:44.927 [2024-07-22 17:12:46.438865] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:44.927 [2024-07-22 17:12:46.444822] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.927 
17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.927 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.927 [2024-07-22 17:12:46.507335] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:44.927 [2024-07-22 17:12:46.507400] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:44.927 [2024-07-22 17:12:46.507413] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:45.186 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.186 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:45.186 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.186 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:45.186 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 
00:34:45.186 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.187 [2024-07-22 17:12:46.662942] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:45.187 [2024-07-22 17:12:46.662993] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.187 [2024-07-22 17:12:46.668958] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:45.187 [2024-07-22 17:12:46.669137] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io. 
17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:45.187 spdk:cnode0:10.0.0.2:4421 found again 00:34:45.187 [2024-07-22 17:12:46.669398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.187 [2024-07-22 17:12:46.669489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.187 [2024-07-22 17:12:46.669621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.187 [2024-07-22 17:12:46.669640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.187 [2024-07-22 17:12:46.669655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.187 [2024-07-22 17:12:46.669668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.187 [2024-07-22 17:12:46.669682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.187 [2024-07-22 17:12:46.669694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.187 [2024-07-22 17:12:46.669707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == 
'"nvme0n1' 'nvme0n2"' ']]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.187 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.188 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:45.188 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- 
# local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:45.445 17:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.445 17:12:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.822 [2024-07-22 17:12:48.055009] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:46.822 [2024-07-22 17:12:48.055228] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:46.822 [2024-07-22 17:12:48.055310] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:46.822 [2024-07-22 17:12:48.061115] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:46.822 [2024-07-22 17:12:48.132790] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:46.822 [2024-07-22 17:12:48.132854] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.822 request: 00:34:46.822 { 00:34:46.822 "name": "nvme", 00:34:46.822 "trtype": "tcp", 00:34:46.822 "traddr": "10.0.0.2", 00:34:46.822 "adrfam": "ipv4", 00:34:46.822 "trsvcid": "8009", 00:34:46.822 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:46.822 "wait_for_attach": true, 00:34:46.822 "method": "bdev_nvme_start_discovery", 00:34:46.822 "req_id": 1 00:34:46.822 } 00:34:46.822 Got JSON-RPC error response 00:34:46.822 response: 00:34:46.822 { 00:34:46.822 "code": -17, 00:34:46.822 "message": "File exists" 00:34:46.822 } 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:46.822 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@648 -- # local es=0 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.823 request: 00:34:46.823 { 00:34:46.823 "name": "nvme_second", 00:34:46.823 "trtype": "tcp", 00:34:46.823 "traddr": "10.0.0.2", 00:34:46.823 "adrfam": "ipv4", 00:34:46.823 "trsvcid": "8009", 00:34:46.823 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:46.823 "wait_for_attach": true, 00:34:46.823 "method": "bdev_nvme_start_discovery", 00:34:46.823 "req_id": 1 00:34:46.823 } 00:34:46.823 Got JSON-RPC error response 00:34:46.823 response: 00:34:46.823 { 00:34:46.823 "code": -17, 00:34:46.823 "message": "File exists" 00:34:46.823 } 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:46.823 
17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.823 17:12:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.198 [2024-07-22 17:12:49.397506] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.198 [2024-07-22 17:12:49.397585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:34:48.198 [2024-07-22 17:12:49.397665] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:48.198 [2024-07-22 17:12:49.397678] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:48.198 [2024-07-22 17:12:49.397692] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:49.134 [2024-07-22 17:12:50.397551] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.134 [2024-07-22 17:12:50.397625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:34:49.134 [2024-07-22 17:12:50.397696] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:49.134 [2024-07-22 17:12:50.397709] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:49.134 [2024-07-22 17:12:50.397722] 
bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:50.124 [2024-07-22 17:12:51.397279] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:50.124 request: 00:34:50.124 { 00:34:50.124 "name": "nvme_second", 00:34:50.124 "trtype": "tcp", 00:34:50.124 "traddr": "10.0.0.2", 00:34:50.124 "adrfam": "ipv4", 00:34:50.124 "trsvcid": "8010", 00:34:50.124 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:50.124 "wait_for_attach": false, 00:34:50.124 "attach_timeout_ms": 3000, 00:34:50.124 "method": "bdev_nvme_start_discovery", 00:34:50.124 "req_id": 1 00:34:50.124 } 00:34:50.124 Got JSON-RPC error response 00:34:50.124 response: 00:34:50.124 { 00:34:50.124 "code": -110, 00:34:50.124 "message": "Connection timed out" 00:34:50.124 } 00:34:50.124 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:50.124 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:50.124 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:50.124 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:50.124 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 84330 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:50.125 rmmod nvme_tcp 00:34:50.125 rmmod nvme_fabrics 00:34:50.125 rmmod nvme_keyring 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 84298 ']' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 84298 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 84298 ']' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 84298 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84298 00:34:50.125 killing process with pid 84298 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84298' 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 84298 00:34:50.125 17:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 84298 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:51.500 00:34:51.500 real 0m11.378s 00:34:51.500 user 0m21.241s 00:34:51.500 sys 0m2.612s 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.500 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.500 ************************************ 00:34:51.500 END TEST nvmf_host_discovery 00:34:51.500 ************************************ 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.759 ************************************ 00:34:51.759 START TEST nvmf_host_multipath_status 00:34:51.759 ************************************ 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:51.759 * Looking for test storage... 00:34:51.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.759 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:51.760 Cannot find device "nvmf_tgt_br" 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:51.760 Cannot find device "nvmf_tgt_br2" 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:51.760 Cannot find device "nvmf_tgt_br" 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:51.760 Cannot find device "nvmf_tgt_br2" 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:34:51.760 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:52.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:52.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:52.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:34:52.018 00:34:52.018 --- 10.0.0.2 ping statistics --- 00:34:52.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.018 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:52.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:52.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:34:52.018 00:34:52.018 --- 10.0.0.3 ping statistics --- 00:34:52.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.018 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:52.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:52.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:34:52.018 00:34:52.018 --- 10.0.0.1 ping statistics --- 00:34:52.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.018 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:52.018 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.019 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:52.019 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:52.019 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.019 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:52.019 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=84801 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 84801 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 84801 ']' 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
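Note on the nvmf_veth_init trace above: it builds one initiator-side interface (nvmf_init_if, 10.0.0.1) on the host and two target-side interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge; the multipath test itself then listens twice on 10.0.0.2 (ports 4420 and 4421). Condensed from the xtrace as a readability sketch (names and addresses are exactly the ones in the log, only the repetition is collapsed into a loop):

# Namespace and veth pairs (target ends are moved into the namespace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, target interfaces 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side ends together
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Accept NVMe/TCP on the initiator interface, allow traffic forwarded within the bridge,
# then verify both target addresses answer (the ping output above)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3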
00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:52.276 17:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:52.276 [2024-07-22 17:12:53.799585] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:52.276 [2024-07-22 17:12:53.799758] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.534 [2024-07-22 17:12:53.991184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:52.792 [2024-07-22 17:12:54.309957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.792 [2024-07-22 17:12:54.310013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.792 [2024-07-22 17:12:54.310027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.792 [2024-07-22 17:12:54.310041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.792 [2024-07-22 17:12:54.310059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.792 [2024-07-22 17:12:54.310269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.792 [2024-07-22 17:12:54.310313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.050 [2024-07-22 17:12:54.566786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.308 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=84801 00:34:53.309 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:53.567 [2024-07-22 17:12:54.944626] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.567 17:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:53.826 Malloc0 00:34:53.826 17:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:54.085 17:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:54.344 17:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:54.344 [2024-07-22 17:12:55.947355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.603 17:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:54.603 [2024-07-22 17:12:56.139441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=84851 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 84851 /var/tmp/bdevperf.sock 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 84851 ']' 00:34:54.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
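For readability, the target-side bring-up traced above condenses to the RPC sequence below, run against the nvmf_tgt started inside the nvmf_tgt_ns_spdk namespace. Commands and arguments are copied verbatim from the trace; the $rpc shell variable is just shorthand for this sketch, and flag meanings beyond what the log shows are not asserted here.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0           # sized from MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same subsystem and address give the host two ports to path over
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421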
00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:54.603 17:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:55.990 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:55.990 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:55.990 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:55.990 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:56.248 Nvme0n1 00:34:56.248 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:56.506 Nvme0n1 00:34:56.506 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:56.506 17:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:58.465 17:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:58.465 17:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:58.724 17:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:58.982 17:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:59.916 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:59.916 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:59.916 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.916 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.174 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.174 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:00.174 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.174 17:13:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.432 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.432 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.432 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.432 17:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.690 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.690 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.690 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.690 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.948 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.948 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.948 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.948 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:01.207 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.207 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:01.207 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.207 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:01.465 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.465 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:01.465 17:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:01.724 17:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:01.982 17:13:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.356 17:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.614 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.614 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.614 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.614 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:03.873 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.873 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:03.873 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.873 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:04.131 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.131 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:04.131 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.131 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:04.408 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.408 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:04.408 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:04.408 17:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.666 17:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.666 17:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:04.666 17:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:04.924 17:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:05.182 17:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:06.118 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:06.118 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:06.118 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.118 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:06.391 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.391 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:06.391 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.391 17:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:06.650 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.650 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:06.650 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.650 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.908 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.908 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:35:06.908 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.908 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:07.474 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.474 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:07.474 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.474 17:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:07.475 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.475 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:07.475 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.475 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:07.733 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.733 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:07.733 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:08.303 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:08.303 17:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:09.678 17:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:09.678 17:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:09.678 17:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.678 17:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:09.678 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.678 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:09.678 17:13:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.678 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:09.936 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.936 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:09.936 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.936 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:10.195 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.195 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:10.195 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:10.195 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.454 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.454 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:10.454 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.454 17:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:10.712 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.712 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:10.712 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.712 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:10.971 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:10.971 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:10.971 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:11.538 17:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:11.538 17:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.952 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:13.211 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.211 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:13.211 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.211 17:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:13.469 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.469 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:13.469 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.469 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:13.727 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.727 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:13.727 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.728 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:35:13.986 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.986 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:13.986 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.986 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.554 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.554 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:14.554 17:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:14.812 17:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:15.070 17:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:16.037 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:16.037 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:16.037 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:16.037 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.296 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.296 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:16.296 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.296 17:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.585 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.585 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.585 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.585 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
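The pattern repeated throughout the section above is: flip the ANA state one listener advertises (set_ANA_state on the target side), sleep, then let port_status re-read the initiator's view over the bdevperf RPC socket and pull one attribute out with jq. Pulled together in one place as a sketch (commands verbatim from the trace, only the line wrapping differs):

# Target side: advertise a new ANA state for the listener on port 4420
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
# Initiator side: is the path through port 4420 still the current one?
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
# The same query with .connected or .accessible, or with trsvcid=="4421", yields the
# other five values each check_status call compares.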
00:35:16.843 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.843 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.843 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.844 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:17.101 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.101 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:17.102 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.102 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:17.668 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.668 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:17.668 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.668 17:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:17.668 17:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.668 17:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:18.235 17:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:18.235 17:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:18.235 17:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:18.494 17:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:19.429 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:19.429 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:19.429 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
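For context on this final block: until the bdev_nvme_set_multipath_policy call, the controller attached earlier runs with the default multipath policy, which is why only one of the two paths reports current==true at a time in the checks above; after switching Nvme0n1 to active_active and returning both listeners to optimized, the check_status true true true true true true step expects current==true on both 4420 and 4421. The initiator-side setup this builds on, collected from earlier in the trace (arguments verbatim; the rpc and sock variables are shorthand for this sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s $sock bdev_nvme_set_options -r -1
# First path: port 4420
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
# Second path to the same subsystem: port 4421, attached in multipath mode
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# Spread I/O across both paths instead of treating one as standby
$rpc -s $sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active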
00:35:19.429 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:19.689 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.689 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:19.689 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.689 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:19.949 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.949 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:19.949 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.949 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:20.208 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.208 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:20.208 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.208 17:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:20.466 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.466 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:20.466 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.466 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:20.724 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.724 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:20.724 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.724 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:21.288 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.288 
17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:21.288 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:21.288 17:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:21.544 17:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:22.515 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:22.515 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:22.515 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.515 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:22.773 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:22.773 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:22.773 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:22.773 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.030 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.030 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:23.030 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.030 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:23.289 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.289 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:23.289 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.289 17:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:23.548 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.548 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:23.548 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:23.548 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.806 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:23.806 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:23.806 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:23.806 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.064 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.064 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:24.064 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:24.321 17:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:24.579 17:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:25.970 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.230 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.230 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:35:26.230 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.230 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:26.489 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.489 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:26.489 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.489 17:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:26.747 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.747 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:26.747 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.747 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:27.005 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.005 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:27.005 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.005 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:27.264 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.264 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:27.264 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:27.523 17:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:27.782 17:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:28.719 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:28.719 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:28.719 17:13:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.719 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:28.977 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:28.977 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:28.977 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.977 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:29.236 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:29.236 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:29.236 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.236 17:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:29.494 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.494 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:29.494 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.494 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:29.752 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.752 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:29.752 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.752 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:30.010 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.010 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:30.010 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.010 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 84851 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 84851 ']' 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 84851 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84851 00:35:30.269 killing process with pid 84851 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84851' 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 84851 00:35:30.269 17:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 84851 00:35:31.204 Connection closed with partial response: 00:35:31.204 00:35:31.204 00:35:31.776 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 84851 00:35:31.776 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:35:31.776 [2024-07-22 17:12:56.281785] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:31.776 [2024-07-22 17:12:56.281979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84851 ] 00:35:31.776 [2024-07-22 17:12:56.467315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.776 [2024-07-22 17:12:56.756652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.776 [2024-07-22 17:12:57.009711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:35:31.776 Running I/O for 90 seconds... 
00:35:31.776 [2024-07-22 17:13:12.832893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.776 [2024-07-22 17:13:12.833436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.776 [2024-07-22 17:13:12.833483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.776 [2024-07-22 17:13:12.833530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.776 [2024-07-22 17:13:12.833599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.776 [2024-07-22 17:13:12.833648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.776 [2024-07-22 17:13:12.833695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:31.776 [2024-07-22 17:13:12.833723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.776 [2024-07-22 17:13:12.833742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.833770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.833789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.833817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.833836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.833865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.833884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.833913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.833931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.833961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.833981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.834028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.834076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.834123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.834179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.834228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:31.777 [2024-07-22 17:13:12.834535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.834978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.834997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.835045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.777 [2024-07-22 17:13:12.835459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.835507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.835555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.835604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:31.777 [2024-07-22 17:13:12.835634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.777 [2024-07-22 17:13:12.835653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.835701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.835749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.835797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.835859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.835913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.835961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.835990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:35:31.778 [2024-07-22 17:13:12.836056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.778 [2024-07-22 17:13:12.836701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.836963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.836991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.778 [2024-07-22 17:13:12.837518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:31.778 [2024-07-22 17:13:12.837547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:31.778 [ ... repeated NVMe I/O trace trimmed: the original log carries several hundred paired nvme_qpair.c NOTICE entries here (243:nvme_io_qpair_print_command for READ/WRITE on sqid:1 with varying cid/nsid/lba values, each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1), emitted in two bursts at 17:13:12 and 17:13:29 while the active path was reporting the ANA inaccessible state during the multipath failover window ... ]
00:35:31.781 Received shutdown signal, test time was about 33.801281 seconds 00:35:31.781 00:35:31.782 Latency(us) 00:35:31.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.782 Job: Nvme0n1 (Core Mask 0x4,
workload: verify, depth: 128, IO size: 4096) 00:35:31.782 Verification LBA range: start 0x0 length 0x4000 00:35:31.782 Nvme0n1 : 33.80 8118.09 31.71 0.00 0.00 15739.08 873.81 4026531.84 00:35:31.782 =================================================================================================================== 00:35:31.782 Total : 8118.09 31.71 0.00 0.00 15739.08 873.81 4026531.84 00:35:31.782 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:32.059 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:32.059 rmmod nvme_tcp 00:35:32.318 rmmod nvme_fabrics 00:35:32.318 rmmod nvme_keyring 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 84801 ']' 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 84801 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 84801 ']' 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 84801 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84801 00:35:32.318 killing process with pid 84801 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84801' 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 84801 00:35:32.318 17:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # wait 84801 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:34.224 00:35:34.224 real 0m42.323s 00:35:34.224 user 2m11.383s 00:35:34.224 sys 0m13.861s 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:34.224 ************************************ 00:35:34.224 END TEST nvmf_host_multipath_status 00:35:34.224 ************************************ 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.224 ************************************ 00:35:34.224 START TEST nvmf_discovery_remove_ifc 00:35:34.224 ************************************ 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:34.224 * Looking for test storage... 
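The tail of the multipath_status run above is the suite's standard teardown: the subsystem is deleted over RPC, nvmftestfini unloads the kernel NVMe/TCP modules, the target process is killed, and the namespace/veth plumbing is flushed. A condensed bash sketch of those steps, reconstructed from the commands visible in the trace (the pid and paths are the ones from this run; the netns deletion is an assumption about what the hidden _remove_spdk_ns helper does):

  # teardown as traced for the multipath_status run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  sync                               # nvmfcleanup flushes before unloading modules
  modprobe -v -r nvme-tcp            # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 84801 && wait 84801           # killprocess: stop the target reactor (pid from this run)
  ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns (its output is hidden in the trace)
  ip -4 addr flush nvmf_init_if      # drop the initiator-side test address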
00:35:34.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.224 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:34.225 Cannot find device "nvmf_tgt_br" 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:34.225 Cannot find device "nvmf_tgt_br2" 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:34.225 Cannot find device "nvmf_tgt_br" 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:34.225 Cannot find device "nvmf_tgt_br2" 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:34.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:34.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:34.225 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:34.483 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:34.483 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:34.483 17:13:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:34.483 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:34.483 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:34.483 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:34.483 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:34.484 17:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:34.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:35:34.484 00:35:34.484 --- 10.0.0.2 ping statistics --- 00:35:34.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.484 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:34.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:34.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:35:34.484 00:35:34.484 --- 10.0.0.3 ping statistics --- 00:35:34.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.484 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:34.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:35:34.484 00:35:34.484 --- 10.0.0.1 ping statistics --- 00:35:34.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.484 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=85661 00:35:34.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 85661 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 85661 ']' 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:34.484 17:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.742 [2024-07-22 17:13:36.143765] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
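For orientation, the nvmf_veth_init trace above builds the virtual test network this run talks over: a namespace nvmf_tgt_ns_spdk holding the two target-side addresses (10.0.0.2 and 10.0.0.3), an initiator-side veth at 10.0.0.1, and a bridge nvmf_br joining the peer ends, plus an iptables accept rule for the NVMe/TCP port. A condensed sketch of that wiring, using only the commands shown in the trace (the pre-cleanup of stale devices and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target interface 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target interface 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joining both sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # reachability from the host
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # and back from the namespace

The target application is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as traced above), so its TCP listeners on 10.0.0.2 are reachable from the initiator side only through this bridge.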
00:35:34.742 [2024-07-22 17:13:36.144095] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.742 [2024-07-22 17:13:36.312561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.999 [2024-07-22 17:13:36.571855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.999 [2024-07-22 17:13:36.572133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.999 [2024-07-22 17:13:36.572299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.999 [2024-07-22 17:13:36.572411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.999 [2024-07-22 17:13:36.572453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.999 [2024-07-22 17:13:36.572581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.256 [2024-07-22 17:13:36.827436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:35:35.516 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.775 [2024-07-22 17:13:37.197862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.775 [2024-07-22 17:13:37.206017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:35.775 null0 00:35:35.775 [2024-07-22 17:13:37.237962] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=85693 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85693 /tmp/host.sock 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 85693 ']' 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.775 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:35.775 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:35.776 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.776 17:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.776 [2024-07-22 17:13:37.392101] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:35.776 [2024-07-22 17:13:37.392293] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85693 ] 00:35:36.034 [2024-07-22 17:13:37.575936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.337 [2024-07-22 17:13:37.820339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.925 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:36.925 [2024-07-22 17:13:38.508680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:35:37.184 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.184 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:37.184 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.184 17:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.118 
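The host-side bring-up traced above starts a second SPDK app bound to /tmp/host.sock and then drives it over RPC: enable bdev_nvme options, finish framework init, and start discovery against the target's 8009 discovery service. A hedged manual equivalent, assuming rpc_cmd in the trace forwards to scripts/rpc.py (every path and flag below is copied from the xtrace):

  # Start the host app and wait for its RPC socket, then issue the same three RPCs.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!
  while [ ! -S /tmp/host.sock ]; do sleep 0.1; done    # stand-in for the harness's waitforlisten
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

Since --wait-for-attach only returns after the discovered subsystem has been attached, the first bdev-list check that follows already sees nvme0n1.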
[2024-07-22 17:13:39.672143] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:38.118 [2024-07-22 17:13:39.672191] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:38.118 [2024-07-22 17:13:39.672246] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:38.118 [2024-07-22 17:13:39.678211] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:38.419 [2024-07-22 17:13:39.745598] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:38.419 [2024-07-22 17:13:39.745714] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:38.419 [2024-07-22 17:13:39.745791] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:38.419 [2024-07-22 17:13:39.745817] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:38.419 [2024-07-22 17:13:39.745858] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:38.419 [2024-07-22 17:13:39.751655] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.419 17:13:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:38.419 17:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:39.353 17:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:40.728 17:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:41.663 17:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:41.663 17:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.663 17:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:41.663 17:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:42.596 17:13:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:43.545 17:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:43.803 [2024-07-22 17:13:45.172497] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:43.803 [2024-07-22 17:13:45.172599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.803 [2024-07-22 17:13:45.172621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.803 [2024-07-22 17:13:45.172641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.803 [2024-07-22 17:13:45.172656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.803 [2024-07-22 17:13:45.172671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.803 [2024-07-22 17:13:45.172685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.804 [2024-07-22 17:13:45.172701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.804 [2024-07-22 17:13:45.172714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.804 [2024-07-22 17:13:45.172729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.804 [2024-07-22 17:13:45.172743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.804 [2024-07-22 17:13:45.172756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:35:43.804 [2024-07-22 17:13:45.182477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:35:43.804 [2024-07-22 17:13:45.192504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:44.741 [2024-07-22 17:13:46.213295] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:35:44.741 [2024-07-22 17:13:46.213394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:35:44.741 [2024-07-22 17:13:46.213421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61500002ad80 is same with the state(5) to be set 00:35:44.741 [2024-07-22 17:13:46.213482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:35:44.741 [2024-07-22 17:13:46.214095] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:44.741 [2024-07-22 17:13:46.214147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:44.741 [2024-07-22 17:13:46.214163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:44.741 [2024-07-22 17:13:46.214187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:44.741 [2024-07-22 17:13:46.214224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.741 [2024-07-22 17:13:46.214241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:44.741 17:13:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:45.675 [2024-07-22 17:13:47.214355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.675 [2024-07-22 17:13:47.214435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.675 [2024-07-22 17:13:47.214450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.675 [2024-07-22 17:13:47.214466] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:45.675 [2024-07-22 17:13:47.214498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.675 [2024-07-22 17:13:47.214549] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:45.675 [2024-07-22 17:13:47.214618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.675 [2024-07-22 17:13:47.214650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.675 [2024-07-22 17:13:47.214670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.675 [2024-07-22 17:13:47.214684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.675 [2024-07-22 17:13:47.214698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.675 [2024-07-22 17:13:47.214711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.675 [2024-07-22 17:13:47.214725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.675 [2024-07-22 17:13:47.214738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.675 [2024-07-22 17:13:47.214751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.675 [2024-07-22 17:13:47.214765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.675 [2024-07-22 17:13:47.214778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
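While the host logs these reconnect failures, the shell side keeps polling the bdev list until it reads back empty. The repeated rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs blocks above all come from two small helpers in discovery_remove_ifc.sh; a hedged reconstruction of their shape (not the literal source, and again assuming rpc_cmd forwards to scripts/rpc.py):

  get_bdev_list() {
      # All bdev names known to the host app, flattened to a single sorted line.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll once a second until the list matches what the test expects:
      # "nvme0n1" after attach, "" after the interface is pulled, "nvme1n1" after re-attach.
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

The wait_for_bdev '' call in flight here returns once the controller-loss timeout (2 s, from the discovery flags) removes nvme0n1 from the list.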
00:35:45.675 [2024-07-22 17:13:47.215038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:35:45.675 [2024-07-22 17:13:47.216052] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:45.675 [2024-07-22 17:13:47.216089] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:45.675 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:45.935 17:13:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.871 17:13:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:46.871 17:13:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:47.806 [2024-07-22 17:13:49.220583] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:47.806 [2024-07-22 17:13:49.220633] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:47.806 [2024-07-22 17:13:49.220676] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:47.806 [2024-07-22 17:13:49.226680] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:47.806 [2024-07-22 17:13:49.292361] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:47.806 [2024-07-22 17:13:49.292452] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:47.806 [2024-07-22 17:13:49.292526] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:47.806 [2024-07-22 17:13:49.292549] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:47.806 [2024-07-22 17:13:49.292564] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:47.806 [2024-07-22 17:13:49.299369] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
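Restoring the address and link (traced just before the re-attach events above) is all it takes for the discovery poller to find the subsystem again; note that it comes back as a new controller, nvme1, so the test now waits for nvme1n1 rather than nvme0n1. A hedged equivalent of that step:

  # The two ip commands are copied from the trace; wait_for_bdev is the polling helper
  # sketched earlier (a reconstruction, not the test's literal source).
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1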
00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 85693 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 85693 ']' 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 85693 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85693 00:35:48.065 killing process with pid 85693 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85693' 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 85693 00:35:48.065 17:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 85693 00:35:49.488 17:13:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:49.488 17:13:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:49.488 17:13:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:49.488 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:49.488 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:49.488 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:49.488 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:49.488 rmmod nvme_tcp 00:35:49.488 rmmod nvme_fabrics 00:35:49.488 rmmod nvme_keyring 00:35:49.747 17:13:51 
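The teardown running here (killprocess for the host, nvmftestfini with module unload, then killprocess for the target and namespace cleanup) boils down to roughly the following; pids and module names are from this run, and the _remove_spdk_ns internals are assumed rather than copied:

  # hostpid=85693 and nvmfpid=85661 in this run; the harness's killprocess also
  # sanity-checks the process name before killing, which is omitted here.
  kill "$hostpid"
  while kill -0 "$hostpid" 2>/dev/null; do sleep 0.1; done
  modprobe -v -r nvme-tcp        # the -v output above shows nvme_tcp/nvme_fabrics/nvme_keyring being rmmod'd
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if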
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 85661 ']' 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 85661 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 85661 ']' 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 85661 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85661 00:35:49.747 killing process with pid 85661 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85661' 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 85661 00:35:49.747 17:13:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 85661 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.123 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:51.124 00:35:51.124 real 0m17.135s 00:35:51.124 user 0m28.151s 00:35:51.124 sys 0m3.428s 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:51.124 ************************************ 00:35:51.124 END TEST nvmf_discovery_remove_ifc 00:35:51.124 ************************************ 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- 
# run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.124 ************************************ 00:35:51.124 START TEST nvmf_identify_kernel_target 00:35:51.124 ************************************ 00:35:51.124 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:51.385 * Looking for test storage... 00:35:51.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:51.385 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:51.386 17:13:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:51.386 Cannot find device "nvmf_tgt_br" 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:51.386 Cannot find device "nvmf_tgt_br2" 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:51.386 Cannot find device "nvmf_tgt_br" 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:51.386 Cannot find device "nvmf_tgt_br2" 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:35:51.386 17:13:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:51.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:51.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:51.645 
17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:51.645 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:51.646 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:51.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:35:51.903 00:35:51.903 --- 10.0.0.2 ping statistics --- 00:35:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.903 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:51.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:51.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:35:51.903 00:35:51.903 --- 10.0.0.3 ping statistics --- 00:35:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.903 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:51.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:51.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:35:51.903 00:35:51.903 --- 10.0.0.1 ping statistics --- 00:35:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.903 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:51.903 17:13:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:52.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:52.160 Waiting for block devices as requested 00:35:52.418 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:52.418 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:52.418 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:52.677 No valid GPT data, bailing 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:35:52.677 17:13:54 
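With the nvmet module loaded and the configfs paths defined above, the harness scans /sys/block/nvme* for a block device that is not in use (it settles on /dev/nvme1n1 further below) and then wires up the kernel NVMe-oF target with the mkdir/echo/ln -s sequence that follows. Because set -x does not show redirect targets, the attribute files in this sketch are the standard nvmet configfs names and should be read as assumptions rather than a transcription of the script:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"        # presumed target of one bare 'echo 1'
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # back the namespace with the free disk
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/$nqn"                  # expose the subsystem on the port

The 'echo SPDK-nqn.2016-06.io.spdk:testnqn' seen in the log presumably sets a model/serial attribute and is omitted here. Once the port symlink exists, an initiator can check the target with nvme-cli, e.g. 'nvme discover -t tcp -a 10.0.0.1 -s 4420' and 'nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn'; the discovery and identify steps below exercise essentially that path.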
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:35:52.677 No valid GPT data, bailing 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:52.677 No valid GPT data, bailing 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:35:52.677 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:52.936 No valid GPT data, bailing 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:52.936 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:52.937 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:52.937 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -a 10.0.0.1 -t tcp -s 4420 00:35:52.937 00:35:52.937 Discovery Log Number of Records 2, Generation counter 2 00:35:52.937 =====Discovery Log Entry 0====== 00:35:52.937 trtype: tcp 00:35:52.937 adrfam: ipv4 00:35:52.937 subtype: current discovery subsystem 00:35:52.937 treq: not specified, sq flow control disable supported 00:35:52.937 portid: 1 00:35:52.937 trsvcid: 4420 00:35:52.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:52.937 traddr: 10.0.0.1 00:35:52.937 eflags: none 00:35:52.937 sectype: none 00:35:52.937 =====Discovery Log Entry 1====== 00:35:52.937 trtype: tcp 00:35:52.937 adrfam: ipv4 00:35:52.937 subtype: nvme subsystem 00:35:52.937 treq: not 
specified, sq flow control disable supported 00:35:52.937 portid: 1 00:35:52.937 trsvcid: 4420 00:35:52.937 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:52.937 traddr: 10.0.0.1 00:35:52.937 eflags: none 00:35:52.937 sectype: none 00:35:52.937 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:52.937 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:53.196 ===================================================== 00:35:53.196 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:53.196 ===================================================== 00:35:53.196 Controller Capabilities/Features 00:35:53.196 ================================ 00:35:53.196 Vendor ID: 0000 00:35:53.196 Subsystem Vendor ID: 0000 00:35:53.196 Serial Number: 4907f203ca1919275aaf 00:35:53.196 Model Number: Linux 00:35:53.196 Firmware Version: 6.7.0-68 00:35:53.196 Recommended Arb Burst: 0 00:35:53.196 IEEE OUI Identifier: 00 00 00 00:35:53.196 Multi-path I/O 00:35:53.196 May have multiple subsystem ports: No 00:35:53.196 May have multiple controllers: No 00:35:53.196 Associated with SR-IOV VF: No 00:35:53.196 Max Data Transfer Size: Unlimited 00:35:53.196 Max Number of Namespaces: 0 00:35:53.196 Max Number of I/O Queues: 1024 00:35:53.196 NVMe Specification Version (VS): 1.3 00:35:53.196 NVMe Specification Version (Identify): 1.3 00:35:53.196 Maximum Queue Entries: 1024 00:35:53.196 Contiguous Queues Required: No 00:35:53.196 Arbitration Mechanisms Supported 00:35:53.196 Weighted Round Robin: Not Supported 00:35:53.196 Vendor Specific: Not Supported 00:35:53.196 Reset Timeout: 7500 ms 00:35:53.196 Doorbell Stride: 4 bytes 00:35:53.196 NVM Subsystem Reset: Not Supported 00:35:53.196 Command Sets Supported 00:35:53.196 NVM Command Set: Supported 00:35:53.196 Boot Partition: Not Supported 00:35:53.196 Memory Page Size Minimum: 4096 bytes 00:35:53.196 Memory Page Size Maximum: 4096 bytes 00:35:53.196 Persistent Memory Region: Not Supported 00:35:53.196 Optional Asynchronous Events Supported 00:35:53.196 Namespace Attribute Notices: Not Supported 00:35:53.196 Firmware Activation Notices: Not Supported 00:35:53.196 ANA Change Notices: Not Supported 00:35:53.196 PLE Aggregate Log Change Notices: Not Supported 00:35:53.196 LBA Status Info Alert Notices: Not Supported 00:35:53.196 EGE Aggregate Log Change Notices: Not Supported 00:35:53.196 Normal NVM Subsystem Shutdown event: Not Supported 00:35:53.196 Zone Descriptor Change Notices: Not Supported 00:35:53.196 Discovery Log Change Notices: Supported 00:35:53.196 Controller Attributes 00:35:53.196 128-bit Host Identifier: Not Supported 00:35:53.196 Non-Operational Permissive Mode: Not Supported 00:35:53.196 NVM Sets: Not Supported 00:35:53.196 Read Recovery Levels: Not Supported 00:35:53.196 Endurance Groups: Not Supported 00:35:53.196 Predictable Latency Mode: Not Supported 00:35:53.196 Traffic Based Keep ALive: Not Supported 00:35:53.196 Namespace Granularity: Not Supported 00:35:53.196 SQ Associations: Not Supported 00:35:53.196 UUID List: Not Supported 00:35:53.196 Multi-Domain Subsystem: Not Supported 00:35:53.196 Fixed Capacity Management: Not Supported 00:35:53.196 Variable Capacity Management: Not Supported 00:35:53.196 Delete Endurance Group: Not Supported 00:35:53.196 Delete NVM Set: Not Supported 00:35:53.196 Extended LBA Formats Supported: Not Supported 00:35:53.196 Flexible Data 
Placement Supported: Not Supported 00:35:53.196 00:35:53.196 Controller Memory Buffer Support 00:35:53.196 ================================ 00:35:53.196 Supported: No 00:35:53.196 00:35:53.196 Persistent Memory Region Support 00:35:53.196 ================================ 00:35:53.196 Supported: No 00:35:53.196 00:35:53.196 Admin Command Set Attributes 00:35:53.196 ============================ 00:35:53.197 Security Send/Receive: Not Supported 00:35:53.197 Format NVM: Not Supported 00:35:53.197 Firmware Activate/Download: Not Supported 00:35:53.197 Namespace Management: Not Supported 00:35:53.197 Device Self-Test: Not Supported 00:35:53.197 Directives: Not Supported 00:35:53.197 NVMe-MI: Not Supported 00:35:53.197 Virtualization Management: Not Supported 00:35:53.197 Doorbell Buffer Config: Not Supported 00:35:53.197 Get LBA Status Capability: Not Supported 00:35:53.197 Command & Feature Lockdown Capability: Not Supported 00:35:53.197 Abort Command Limit: 1 00:35:53.197 Async Event Request Limit: 1 00:35:53.197 Number of Firmware Slots: N/A 00:35:53.197 Firmware Slot 1 Read-Only: N/A 00:35:53.197 Firmware Activation Without Reset: N/A 00:35:53.197 Multiple Update Detection Support: N/A 00:35:53.197 Firmware Update Granularity: No Information Provided 00:35:53.197 Per-Namespace SMART Log: No 00:35:53.197 Asymmetric Namespace Access Log Page: Not Supported 00:35:53.197 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:53.197 Command Effects Log Page: Not Supported 00:35:53.197 Get Log Page Extended Data: Supported 00:35:53.197 Telemetry Log Pages: Not Supported 00:35:53.197 Persistent Event Log Pages: Not Supported 00:35:53.197 Supported Log Pages Log Page: May Support 00:35:53.197 Commands Supported & Effects Log Page: Not Supported 00:35:53.197 Feature Identifiers & Effects Log Page:May Support 00:35:53.197 NVMe-MI Commands & Effects Log Page: May Support 00:35:53.197 Data Area 4 for Telemetry Log: Not Supported 00:35:53.197 Error Log Page Entries Supported: 1 00:35:53.197 Keep Alive: Not Supported 00:35:53.197 00:35:53.197 NVM Command Set Attributes 00:35:53.197 ========================== 00:35:53.197 Submission Queue Entry Size 00:35:53.197 Max: 1 00:35:53.197 Min: 1 00:35:53.197 Completion Queue Entry Size 00:35:53.197 Max: 1 00:35:53.197 Min: 1 00:35:53.197 Number of Namespaces: 0 00:35:53.197 Compare Command: Not Supported 00:35:53.197 Write Uncorrectable Command: Not Supported 00:35:53.197 Dataset Management Command: Not Supported 00:35:53.197 Write Zeroes Command: Not Supported 00:35:53.197 Set Features Save Field: Not Supported 00:35:53.197 Reservations: Not Supported 00:35:53.197 Timestamp: Not Supported 00:35:53.197 Copy: Not Supported 00:35:53.197 Volatile Write Cache: Not Present 00:35:53.197 Atomic Write Unit (Normal): 1 00:35:53.197 Atomic Write Unit (PFail): 1 00:35:53.197 Atomic Compare & Write Unit: 1 00:35:53.197 Fused Compare & Write: Not Supported 00:35:53.197 Scatter-Gather List 00:35:53.197 SGL Command Set: Supported 00:35:53.197 SGL Keyed: Not Supported 00:35:53.197 SGL Bit Bucket Descriptor: Not Supported 00:35:53.197 SGL Metadata Pointer: Not Supported 00:35:53.197 Oversized SGL: Not Supported 00:35:53.197 SGL Metadata Address: Not Supported 00:35:53.197 SGL Offset: Supported 00:35:53.197 Transport SGL Data Block: Not Supported 00:35:53.197 Replay Protected Memory Block: Not Supported 00:35:53.197 00:35:53.197 Firmware Slot Information 00:35:53.197 ========================= 00:35:53.197 Active slot: 0 00:35:53.197 00:35:53.197 00:35:53.197 Error Log 
00:35:53.197 ========= 00:35:53.197 00:35:53.197 Active Namespaces 00:35:53.197 ================= 00:35:53.197 Discovery Log Page 00:35:53.197 ================== 00:35:53.197 Generation Counter: 2 00:35:53.197 Number of Records: 2 00:35:53.197 Record Format: 0 00:35:53.197 00:35:53.197 Discovery Log Entry 0 00:35:53.197 ---------------------- 00:35:53.197 Transport Type: 3 (TCP) 00:35:53.197 Address Family: 1 (IPv4) 00:35:53.197 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:53.197 Entry Flags: 00:35:53.197 Duplicate Returned Information: 0 00:35:53.197 Explicit Persistent Connection Support for Discovery: 0 00:35:53.197 Transport Requirements: 00:35:53.197 Secure Channel: Not Specified 00:35:53.197 Port ID: 1 (0x0001) 00:35:53.197 Controller ID: 65535 (0xffff) 00:35:53.197 Admin Max SQ Size: 32 00:35:53.197 Transport Service Identifier: 4420 00:35:53.197 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:53.197 Transport Address: 10.0.0.1 00:35:53.197 Discovery Log Entry 1 00:35:53.197 ---------------------- 00:35:53.197 Transport Type: 3 (TCP) 00:35:53.197 Address Family: 1 (IPv4) 00:35:53.197 Subsystem Type: 2 (NVM Subsystem) 00:35:53.197 Entry Flags: 00:35:53.197 Duplicate Returned Information: 0 00:35:53.197 Explicit Persistent Connection Support for Discovery: 0 00:35:53.197 Transport Requirements: 00:35:53.197 Secure Channel: Not Specified 00:35:53.197 Port ID: 1 (0x0001) 00:35:53.197 Controller ID: 65535 (0xffff) 00:35:53.197 Admin Max SQ Size: 32 00:35:53.197 Transport Service Identifier: 4420 00:35:53.197 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:53.197 Transport Address: 10.0.0.1 00:35:53.197 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.456 get_feature(0x01) failed 00:35:53.456 get_feature(0x02) failed 00:35:53.456 get_feature(0x04) failed 00:35:53.456 ===================================================== 00:35:53.456 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:53.456 ===================================================== 00:35:53.456 Controller Capabilities/Features 00:35:53.456 ================================ 00:35:53.456 Vendor ID: 0000 00:35:53.456 Subsystem Vendor ID: 0000 00:35:53.456 Serial Number: 5e14fe81fdf7af4874fd 00:35:53.456 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.456 Firmware Version: 6.7.0-68 00:35:53.456 Recommended Arb Burst: 6 00:35:53.456 IEEE OUI Identifier: 00 00 00 00:35:53.456 Multi-path I/O 00:35:53.456 May have multiple subsystem ports: Yes 00:35:53.456 May have multiple controllers: Yes 00:35:53.456 Associated with SR-IOV VF: No 00:35:53.456 Max Data Transfer Size: Unlimited 00:35:53.456 Max Number of Namespaces: 1024 00:35:53.456 Max Number of I/O Queues: 128 00:35:53.456 NVMe Specification Version (VS): 1.3 00:35:53.456 NVMe Specification Version (Identify): 1.3 00:35:53.456 Maximum Queue Entries: 1024 00:35:53.456 Contiguous Queues Required: No 00:35:53.456 Arbitration Mechanisms Supported 00:35:53.457 Weighted Round Robin: Not Supported 00:35:53.457 Vendor Specific: Not Supported 00:35:53.457 Reset Timeout: 7500 ms 00:35:53.457 Doorbell Stride: 4 bytes 00:35:53.457 NVM Subsystem Reset: Not Supported 00:35:53.457 Command Sets Supported 00:35:53.457 NVM Command Set: Supported 00:35:53.457 Boot Partition: Not Supported 00:35:53.457 Memory 
Page Size Minimum: 4096 bytes 00:35:53.457 Memory Page Size Maximum: 4096 bytes 00:35:53.457 Persistent Memory Region: Not Supported 00:35:53.457 Optional Asynchronous Events Supported 00:35:53.457 Namespace Attribute Notices: Supported 00:35:53.457 Firmware Activation Notices: Not Supported 00:35:53.457 ANA Change Notices: Supported 00:35:53.457 PLE Aggregate Log Change Notices: Not Supported 00:35:53.457 LBA Status Info Alert Notices: Not Supported 00:35:53.457 EGE Aggregate Log Change Notices: Not Supported 00:35:53.457 Normal NVM Subsystem Shutdown event: Not Supported 00:35:53.457 Zone Descriptor Change Notices: Not Supported 00:35:53.457 Discovery Log Change Notices: Not Supported 00:35:53.457 Controller Attributes 00:35:53.457 128-bit Host Identifier: Supported 00:35:53.457 Non-Operational Permissive Mode: Not Supported 00:35:53.457 NVM Sets: Not Supported 00:35:53.457 Read Recovery Levels: Not Supported 00:35:53.457 Endurance Groups: Not Supported 00:35:53.457 Predictable Latency Mode: Not Supported 00:35:53.457 Traffic Based Keep ALive: Supported 00:35:53.457 Namespace Granularity: Not Supported 00:35:53.457 SQ Associations: Not Supported 00:35:53.457 UUID List: Not Supported 00:35:53.457 Multi-Domain Subsystem: Not Supported 00:35:53.457 Fixed Capacity Management: Not Supported 00:35:53.457 Variable Capacity Management: Not Supported 00:35:53.457 Delete Endurance Group: Not Supported 00:35:53.457 Delete NVM Set: Not Supported 00:35:53.457 Extended LBA Formats Supported: Not Supported 00:35:53.457 Flexible Data Placement Supported: Not Supported 00:35:53.457 00:35:53.457 Controller Memory Buffer Support 00:35:53.457 ================================ 00:35:53.457 Supported: No 00:35:53.457 00:35:53.457 Persistent Memory Region Support 00:35:53.457 ================================ 00:35:53.457 Supported: No 00:35:53.457 00:35:53.457 Admin Command Set Attributes 00:35:53.457 ============================ 00:35:53.457 Security Send/Receive: Not Supported 00:35:53.457 Format NVM: Not Supported 00:35:53.457 Firmware Activate/Download: Not Supported 00:35:53.457 Namespace Management: Not Supported 00:35:53.457 Device Self-Test: Not Supported 00:35:53.457 Directives: Not Supported 00:35:53.457 NVMe-MI: Not Supported 00:35:53.457 Virtualization Management: Not Supported 00:35:53.457 Doorbell Buffer Config: Not Supported 00:35:53.457 Get LBA Status Capability: Not Supported 00:35:53.457 Command & Feature Lockdown Capability: Not Supported 00:35:53.457 Abort Command Limit: 4 00:35:53.457 Async Event Request Limit: 4 00:35:53.457 Number of Firmware Slots: N/A 00:35:53.457 Firmware Slot 1 Read-Only: N/A 00:35:53.457 Firmware Activation Without Reset: N/A 00:35:53.457 Multiple Update Detection Support: N/A 00:35:53.457 Firmware Update Granularity: No Information Provided 00:35:53.457 Per-Namespace SMART Log: Yes 00:35:53.457 Asymmetric Namespace Access Log Page: Supported 00:35:53.457 ANA Transition Time : 10 sec 00:35:53.457 00:35:53.457 Asymmetric Namespace Access Capabilities 00:35:53.457 ANA Optimized State : Supported 00:35:53.457 ANA Non-Optimized State : Supported 00:35:53.457 ANA Inaccessible State : Supported 00:35:53.457 ANA Persistent Loss State : Supported 00:35:53.457 ANA Change State : Supported 00:35:53.457 ANAGRPID is not changed : No 00:35:53.457 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:53.457 00:35:53.457 ANA Group Identifier Maximum : 128 00:35:53.457 Number of ANA Group Identifiers : 128 00:35:53.457 Max Number of Allowed Namespaces : 1024 00:35:53.457 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:35:53.457 Command Effects Log Page: Supported 00:35:53.457 Get Log Page Extended Data: Supported 00:35:53.457 Telemetry Log Pages: Not Supported 00:35:53.457 Persistent Event Log Pages: Not Supported 00:35:53.457 Supported Log Pages Log Page: May Support 00:35:53.457 Commands Supported & Effects Log Page: Not Supported 00:35:53.457 Feature Identifiers & Effects Log Page:May Support 00:35:53.457 NVMe-MI Commands & Effects Log Page: May Support 00:35:53.457 Data Area 4 for Telemetry Log: Not Supported 00:35:53.457 Error Log Page Entries Supported: 128 00:35:53.457 Keep Alive: Supported 00:35:53.457 Keep Alive Granularity: 1000 ms 00:35:53.457 00:35:53.457 NVM Command Set Attributes 00:35:53.457 ========================== 00:35:53.457 Submission Queue Entry Size 00:35:53.457 Max: 64 00:35:53.457 Min: 64 00:35:53.457 Completion Queue Entry Size 00:35:53.457 Max: 16 00:35:53.457 Min: 16 00:35:53.457 Number of Namespaces: 1024 00:35:53.457 Compare Command: Not Supported 00:35:53.457 Write Uncorrectable Command: Not Supported 00:35:53.457 Dataset Management Command: Supported 00:35:53.457 Write Zeroes Command: Supported 00:35:53.457 Set Features Save Field: Not Supported 00:35:53.457 Reservations: Not Supported 00:35:53.457 Timestamp: Not Supported 00:35:53.457 Copy: Not Supported 00:35:53.457 Volatile Write Cache: Present 00:35:53.457 Atomic Write Unit (Normal): 1 00:35:53.457 Atomic Write Unit (PFail): 1 00:35:53.457 Atomic Compare & Write Unit: 1 00:35:53.457 Fused Compare & Write: Not Supported 00:35:53.457 Scatter-Gather List 00:35:53.457 SGL Command Set: Supported 00:35:53.457 SGL Keyed: Not Supported 00:35:53.457 SGL Bit Bucket Descriptor: Not Supported 00:35:53.457 SGL Metadata Pointer: Not Supported 00:35:53.457 Oversized SGL: Not Supported 00:35:53.457 SGL Metadata Address: Not Supported 00:35:53.457 SGL Offset: Supported 00:35:53.457 Transport SGL Data Block: Not Supported 00:35:53.457 Replay Protected Memory Block: Not Supported 00:35:53.457 00:35:53.457 Firmware Slot Information 00:35:53.457 ========================= 00:35:53.457 Active slot: 0 00:35:53.457 00:35:53.457 Asymmetric Namespace Access 00:35:53.457 =========================== 00:35:53.457 Change Count : 0 00:35:53.457 Number of ANA Group Descriptors : 1 00:35:53.457 ANA Group Descriptor : 0 00:35:53.457 ANA Group ID : 1 00:35:53.457 Number of NSID Values : 1 00:35:53.457 Change Count : 0 00:35:53.457 ANA State : 1 00:35:53.457 Namespace Identifier : 1 00:35:53.457 00:35:53.457 Commands Supported and Effects 00:35:53.457 ============================== 00:35:53.457 Admin Commands 00:35:53.457 -------------- 00:35:53.457 Get Log Page (02h): Supported 00:35:53.457 Identify (06h): Supported 00:35:53.457 Abort (08h): Supported 00:35:53.457 Set Features (09h): Supported 00:35:53.457 Get Features (0Ah): Supported 00:35:53.457 Asynchronous Event Request (0Ch): Supported 00:35:53.457 Keep Alive (18h): Supported 00:35:53.457 I/O Commands 00:35:53.457 ------------ 00:35:53.457 Flush (00h): Supported 00:35:53.457 Write (01h): Supported LBA-Change 00:35:53.457 Read (02h): Supported 00:35:53.457 Write Zeroes (08h): Supported LBA-Change 00:35:53.457 Dataset Management (09h): Supported 00:35:53.457 00:35:53.457 Error Log 00:35:53.457 ========= 00:35:53.457 Entry: 0 00:35:53.457 Error Count: 0x3 00:35:53.457 Submission Queue Id: 0x0 00:35:53.457 Command Id: 0x5 00:35:53.457 Phase Bit: 0 00:35:53.457 Status Code: 0x2 00:35:53.457 Status Code Type: 0x0 00:35:53.457 Do Not Retry: 1 00:35:53.457 Error 
Location: 0x28 00:35:53.457 LBA: 0x0 00:35:53.457 Namespace: 0x0 00:35:53.457 Vendor Log Page: 0x0 00:35:53.457 ----------- 00:35:53.457 Entry: 1 00:35:53.457 Error Count: 0x2 00:35:53.457 Submission Queue Id: 0x0 00:35:53.457 Command Id: 0x5 00:35:53.457 Phase Bit: 0 00:35:53.457 Status Code: 0x2 00:35:53.457 Status Code Type: 0x0 00:35:53.457 Do Not Retry: 1 00:35:53.457 Error Location: 0x28 00:35:53.457 LBA: 0x0 00:35:53.457 Namespace: 0x0 00:35:53.457 Vendor Log Page: 0x0 00:35:53.457 ----------- 00:35:53.457 Entry: 2 00:35:53.457 Error Count: 0x1 00:35:53.457 Submission Queue Id: 0x0 00:35:53.457 Command Id: 0x4 00:35:53.457 Phase Bit: 0 00:35:53.457 Status Code: 0x2 00:35:53.457 Status Code Type: 0x0 00:35:53.457 Do Not Retry: 1 00:35:53.457 Error Location: 0x28 00:35:53.457 LBA: 0x0 00:35:53.457 Namespace: 0x0 00:35:53.457 Vendor Log Page: 0x0 00:35:53.457 00:35:53.458 Number of Queues 00:35:53.458 ================ 00:35:53.458 Number of I/O Submission Queues: 128 00:35:53.458 Number of I/O Completion Queues: 128 00:35:53.458 00:35:53.458 ZNS Specific Controller Data 00:35:53.458 ============================ 00:35:53.458 Zone Append Size Limit: 0 00:35:53.458 00:35:53.458 00:35:53.458 Active Namespaces 00:35:53.458 ================= 00:35:53.458 get_feature(0x05) failed 00:35:53.458 Namespace ID:1 00:35:53.458 Command Set Identifier: NVM (00h) 00:35:53.458 Deallocate: Supported 00:35:53.458 Deallocated/Unwritten Error: Not Supported 00:35:53.458 Deallocated Read Value: Unknown 00:35:53.458 Deallocate in Write Zeroes: Not Supported 00:35:53.458 Deallocated Guard Field: 0xFFFF 00:35:53.458 Flush: Supported 00:35:53.458 Reservation: Not Supported 00:35:53.458 Namespace Sharing Capabilities: Multiple Controllers 00:35:53.458 Size (in LBAs): 1310720 (5GiB) 00:35:53.458 Capacity (in LBAs): 1310720 (5GiB) 00:35:53.458 Utilization (in LBAs): 1310720 (5GiB) 00:35:53.458 UUID: 95e7f4b9-0a6e-4fbb-b6d8-16c1ef394192 00:35:53.458 Thin Provisioning: Not Supported 00:35:53.458 Per-NS Atomic Units: Yes 00:35:53.458 Atomic Boundary Size (Normal): 0 00:35:53.458 Atomic Boundary Size (PFail): 0 00:35:53.458 Atomic Boundary Offset: 0 00:35:53.458 NGUID/EUI64 Never Reused: No 00:35:53.458 ANA group ID: 1 00:35:53.458 Namespace Write Protected: No 00:35:53.458 Number of LBA Formats: 1 00:35:53.458 Current LBA Format: LBA Format #00 00:35:53.458 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:35:53.458 00:35:53.458 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:53.458 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:53.458 17:13:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:53.458 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:53.458 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:53.458 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:53.458 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:53.458 rmmod nvme_tcp 00:35:53.458 rmmod nvme_fabrics 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:53.717 17:13:55 
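After the identify runs, the EXIT trap fires nvmftestfini (unloading nvme-tcp and nvme-fabrics on the host side, as shown above) and then clean_kernel_target, which dismantles the configfs tree in the reverse order it was created. A sketch of that teardown, mirroring the rm/rmdir/modprobe calls visible just below (the destination of the bare 'echo 0' is assumed to be the namespace enable switch):

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  echo 0 > "$sub/namespaces/1/enable"                   # assumed target of 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
  rmdir "$sub/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$sub"
  modprobe -r nvmet_tcp nvmet                           # as in the log

Order matters here: the port-to-subsystem symlink has to go before the subsystem directory can be removed, and the namespace must be disabled and removed before its parent.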
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:53.717 17:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:54.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:54.652 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:54.652 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:54.652 00:35:54.652 real 0m3.464s 00:35:54.652 user 0m1.123s 00:35:54.652 sys 0m1.761s 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:54.652 ************************************ 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:54.652 END TEST nvmf_identify_kernel_target 00:35:54.652 ************************************ 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1142 -- # return 0 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.652 ************************************ 00:35:54.652 START TEST nvmf_auth_host 00:35:54.652 ************************************ 00:35:54.652 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:54.911 * Looking for test storage... 00:35:54.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.911 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:54.912 17:13:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:54.912 Cannot find device "nvmf_tgt_br" 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:54.912 Cannot find device "nvmf_tgt_br2" 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:54.912 Cannot find device "nvmf_tgt_br" 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:54.912 Cannot find device "nvmf_tgt_br2" 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:54.912 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:55.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:55.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:55.170 17:13:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:55.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:55.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:35:55.170 00:35:55.170 --- 10.0.0.2 ping statistics --- 00:35:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.170 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:55.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:55.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:35:55.170 00:35:55.170 --- 10.0.0.3 ping statistics --- 00:35:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.170 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:55.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:55.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:35:55.170 00:35:55.170 --- 10.0.0.1 ping statistics --- 00:35:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.170 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=86635 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 86635 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 86635 ']' 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
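nvmfappstart launches the SPDK target inside the namespace (the exact command line is in the log: nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, PID 86635 in this run) and waitforlisten blocks until the application's RPC socket answers. A simplified sketch of that start-and-wait pattern; the real helper polls the RPC service rather than just checking for the socket file:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # crude stand-in for waitforlisten: wait for the default RPC socket to appear
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  echo "nvmf_tgt ($nvmfpid) is up and listening on /var/tmp/spdk.sock"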
00:35:55.170 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:55.171 17:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=827b0e5f41d6ebd7a70f148b79ffac9d 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2ac 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 827b0e5f41d6ebd7a70f148b79ffac9d 0 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 827b0e5f41d6ebd7a70f148b79ffac9d 0 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=827b0e5f41d6ebd7a70f148b79ffac9d 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2ac 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2ac 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2ac 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.545 17:13:57 
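gen_dhchap_key produces the secrets used later for DH-HMAC-CHAP: given a digest name and a length, it draws random bytes from /dev/urandom, renders them as hex with xxd, runs a python one-liner that wraps the hex into the DHHC-1 key format, and stores the result in a 0600 temp file under /tmp. keys[0] above came out of that path as /tmp/spdk.key-null.2ac, and the same pattern repeats below for the sha512/sha384/sha256 variants. A sketch of the shell side only (the DHHC-1 wrapping step is noted in a comment rather than reimplemented):

  len=32                                              # hex characters requested (null-digest case)
  secret=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes -> len hex chars
  keyfile=$(mktemp -t spdk.key-null.XXX)
  # The harness pipes $secret through a small python helper that emits the
  # DHHC-1:<digest-id>:... form; that conversion is intentionally omitted here.
  printf '%s\n' "$secret" > "$keyfile"
  chmod 0600 "$keyfile"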
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=00c72240fc597c10fcc2522407f8d78d5ebcf20db7ce2483034a05a58e9f017f 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kDv 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 00c72240fc597c10fcc2522407f8d78d5ebcf20db7ce2483034a05a58e9f017f 3 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 00c72240fc597c10fcc2522407f8d78d5ebcf20db7ce2483034a05a58e9f017f 3 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=00c72240fc597c10fcc2522407f8d78d5ebcf20db7ce2483034a05a58e9f017f 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:56.545 17:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.545 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kDv 00:35:56.545 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kDv 00:35:56.545 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kDv 00:35:56.545 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:56.545 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1493630a26247d8c5dad1e1c989accfbb167c28c434ed754 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QX1 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1493630a26247d8c5dad1e1c989accfbb167c28c434ed754 0 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1493630a26247d8c5dad1e1c989accfbb167c28c434ed754 0 
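
The gen_dhchap_key calls in this stretch all follow the same recipe: pick a digest index from the null/sha256/sha384/sha512 map, pull len/2 random bytes from /dev/urandom with xxd, wrap them as a DHHC-1 secret via the inline python step (whose body the trace does not show), and store the result in a 0600 tempfile. A self-contained sketch of that flow; the helper name is illustrative, and the secret encoding (base64 of the key bytes followed by their little-endian CRC32) is an assumption based on the standard DH-HMAC-CHAP secret representation:

  gen_dhchap_key_sketch() {
      local digest=$1 len=$2                       # e.g. "sha256" 32 (len = hex characters)
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len/2 random bytes, printed as hex
      file=$(mktemp -t "spdk.key-${digest}.XXX")
      # Assumed secret encoding: DHHC-1:<hmac id>:base64(key || crc32le(key)):
      python3 -c 'import sys, base64, zlib, struct; k = bytes.fromhex(sys.argv[1]); c = struct.pack("<I", zlib.crc32(k)); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + c).decode()))' "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }
  # Mirrors the calls traced above, e.g.: keys[0]=$(gen_dhchap_key_sketch null 32)
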
00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1493630a26247d8c5dad1e1c989accfbb167c28c434ed754 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QX1 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QX1 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QX1 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3d04a317c6a38defd96102cf69360d3350b65b7d45abb1d2 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YuX 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3d04a317c6a38defd96102cf69360d3350b65b7d45abb1d2 2 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3d04a317c6a38defd96102cf69360d3350b65b7d45abb1d2 2 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3d04a317c6a38defd96102cf69360d3350b65b7d45abb1d2 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.546 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YuX 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YuX 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.YuX 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.804 17:13:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3905f60bf498b435075b75e372456488 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9lk 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3905f60bf498b435075b75e372456488 1 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3905f60bf498b435075b75e372456488 1 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3905f60bf498b435075b75e372456488 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9lk 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9lk 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9lk 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e1eb5a156c00900df7b2a34a813ad83 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.c2N 00:35:56.804 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e1eb5a156c00900df7b2a34a813ad83 1 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e1eb5a156c00900df7b2a34a813ad83 1 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=5e1eb5a156c00900df7b2a34a813ad83 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.c2N 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.c2N 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.c2N 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1416e285d29d876a05fdcffb2269de27051e4c635d0fb4f8 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2ag 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1416e285d29d876a05fdcffb2269de27051e4c635d0fb4f8 2 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1416e285d29d876a05fdcffb2269de27051e4c635d0fb4f8 2 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1416e285d29d876a05fdcffb2269de27051e4c635d0fb4f8 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2ag 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2ag 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2ag 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:56.805 17:13:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc6df804832207819a85b447b3c07cf7 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CaO 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc6df804832207819a85b447b3c07cf7 0 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc6df804832207819a85b447b3c07cf7 0 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc6df804832207819a85b447b3c07cf7 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:56.805 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CaO 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CaO 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.CaO 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b26bb194339effb2a3d08be1843bc3381b5fa65864f3192ff4230a2c6167fc3e 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lXN 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b26bb194339effb2a3d08be1843bc3381b5fa65864f3192ff4230a2c6167fc3e 3 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b26bb194339effb2a3d08be1843bc3381b5fa65864f3192ff4230a2c6167fc3e 3 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b26bb194339effb2a3d08be1843bc3381b5fa65864f3192ff4230a2c6167fc3e 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lXN 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lXN 00:35:57.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lXN 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 86635 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 86635 ']' 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:57.063 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2ac 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kDv ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kDv 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QX1 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.YuX ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.YuX 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9lk 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.c2N ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.c2N 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2ag 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.CaO ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.CaO 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lXN 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.321 17:13:58 
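
Once the five key/ckey pairs exist, host/auth.sh walks the keys array and registers each secret file with the running target through the keyring_file_add_key RPC, adding the companion ckeyN only when one was generated (ckey4 is empty above). A condensed sketch of that loop; the direct rpc.py invocation stands in for the script's rpc_cmd wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[i]}"        # e.g. key0 -> /tmp/spdk.key-null.2ac
      if [[ -n "${ckeys[i]}" ]]; then
          "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"  # controller-side key, when present
      fi
  done
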
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:57.321 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:57.322 17:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:57.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:57.887 Waiting for block devices as requested 00:35:57.887 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:58.856 No valid GPT data, bailing 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:35:58.856 No valid GPT data, bailing 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:58.856 No valid GPT data, bailing 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:58.856 No valid GPT data, bailing 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:58.856 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:59.113 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -a 10.0.0.1 -t tcp -s 4420 00:35:59.114 00:35:59.114 Discovery Log Number of Records 2, Generation counter 2 00:35:59.114 =====Discovery Log Entry 0====== 00:35:59.114 trtype: tcp 00:35:59.114 adrfam: ipv4 00:35:59.114 subtype: current discovery subsystem 00:35:59.114 treq: not specified, sq flow control disable supported 00:35:59.114 portid: 1 00:35:59.114 trsvcid: 4420 00:35:59.114 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:59.114 traddr: 10.0.0.1 00:35:59.114 eflags: none 00:35:59.114 sectype: none 00:35:59.114 =====Discovery Log Entry 1====== 00:35:59.114 trtype: tcp 00:35:59.114 adrfam: ipv4 00:35:59.114 subtype: nvme subsystem 00:35:59.114 treq: not specified, sq flow control disable supported 00:35:59.114 portid: 1 00:35:59.114 trsvcid: 4420 00:35:59.114 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:59.114 traddr: 10.0.0.1 00:35:59.114 eflags: none 00:35:59.114 sectype: none 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.114 nvme0n1 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.114 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.372 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.373 nvme0n1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.373 
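
The connect_authenticate step traced here has two halves on the SPDK initiator side: restrict the allowed DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, then attach to the kernel target using the keyring names registered earlier. A sketch for a single digest/dhgroup/keyid combination, with the address, NQNs, and flags taken from the trace and rpc.py again standing in for rpc_cmd:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify the controller came up authenticated, then tear it down for the next combination.
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
  "$rpc" bdev_nvme_detach_controller nvme0
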
17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.373 17:14:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.373 17:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.631 nvme0n1 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:35:59.631 17:14:01 
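
On the kernel target side, each combination is mirrored by nvmet_auth_set_key, which writes the HMAC name, the FFDHE group, and the DHHC-1 secrets for host nqn.2024-02.io.spdk:host0 into the nvmet configfs host entry created earlier. The trace shows only the echo payloads, not their destinations; the attribute names below are the usual Linux nvmet host attributes and are an assumption here, while the values match the keyid-2 pass in the log:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # assumed attribute names; payloads as traced
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot:' > "$host/dhchap_key"
  echo 'DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr:' > "$host/dhchap_ctrl_key"
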
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.631 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.632 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 nvme0n1 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 nvme0n1 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.890 
17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.890 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
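The sha256/ffdhe2048 pass ends here and the log below repeats the same cycle for ffdhe3072 and ffdhe4096. Each keyid iteration follows one host-side RPC sequence: restrict the initiator to a single digest/DH-group pair, attach a controller with the matching --dhchap-key/--dhchap-ctrlr-key names, confirm it with bdev_nvme_get_controllers, then detach it before the next keyid. A minimal sketch of one such iteration, assuming the rpc_cmd wrapper in the log corresponds to scripts/rpc.py against the running target and reusing the 10.0.0.1:4420 listener, NQNs, and key names shown above (the loop framing is illustrative, not the exact auth.sh code):

    # limit DH-CHAP negotiation to one digest and one DH group for this pass
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # attach using key slot 2; the controller key is passed only when ckey2 is defined
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # verify the authenticated controller came up, then tear it down for the next keyid
    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_nvme_detach_controller nvme0

On the target side, the nvmet_auth_set_key calls in the log echo the digest string ('hmac(sha256)'), the DH group, and the DHHC-1 key/ckey secrets seen above, presumably writing them into the kernel nvmet configfs entry for the host so that both ends hold the same secret before each attach.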
00:36:00.147 nvme0n1 00:36:00.147 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.148 17:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.405 17:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.405 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.664 nvme0n1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.664 17:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.664 17:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.664 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.927 nvme0n1 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:00.927 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.928 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.196 nvme0n1 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.196 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.197 nvme0n1 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.197 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.455 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.456 nvme0n1 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.456 17:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.456 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.022 17:14:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.022 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.023 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.023 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.023 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.023 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.281 nvme0n1 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:02.281 17:14:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.281 17:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.540 nvme0n1 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.540 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.799 nvme0n1 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:36:02.799 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.800 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.058 nvme0n1 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.058 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:03.059 17:14:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:03.059 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.333 nvme0n1 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.333 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:03.334 17:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:05.233 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.234 nvme0n1 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.234 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.492 17:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.776 nvme0n1 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.776 17:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.776 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.777 17:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.777 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.037 nvme0n1 00:36:06.037 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.037 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.037 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.037 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.037 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.296 17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.296 
17:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.555 nvme0n1 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:06.555 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.556 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.125 nvme0n1 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.125 17:14:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.125 17:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.693 nvme0n1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.693 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.259 nvme0n1 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.259 
17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.259 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.517 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.517 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.517 17:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.084 nvme0n1 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:09.084 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.085 17:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.650 nvme0n1 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.650 17:14:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.650 17:14:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.650 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.217 nvme0n1 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.217 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:10.476 nvme0n1 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.476 17:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.476 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.735 nvme0n1 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:10.735 
17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.735 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.736 nvme0n1 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.736 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.995 
17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.995 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.996 nvme0n1 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.996 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.255 nvme0n1 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.255 nvme0n1 00:36:11.255 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.256 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.256 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.256 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.256 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.514 
17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.514 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.515 17:14:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.515 17:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.515 nvme0n1 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:11.515 17:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.515 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.774 nvme0n1 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.774 17:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.774 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.032 nvme0n1 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:12.032 
17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.032 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
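Editor's note: the trace above (and below) is the same four-step cycle repeated for every digest/dhgroup/key combination the test sweeps (sha256 and sha384 paired with ffdhe2048 through ffdhe8192, key IDs 0-4). The following is a condensed sketch of a single iteration, not the verbatim host/auth.sh, built only from the RPC calls visible in the trace; it assumes the autotest harness is running (rpc_cmd pointing at a live SPDK target), that the DH-HMAC-CHAP secrets key$keyid / ckey$keyid were registered earlier in the test, and that the kernel nvmet subsystem already holds the matching host key.

  # One pass of the loop the trace keeps repeating (values are examples).
  digest=sha384 dhgroup=ffdhe3072 keyid=0

  # Restrict the host to a single digest/dhgroup so exactly this combination is negotiated.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with the host key; the controller key (when one exists for this keyid)
  # enables bidirectional authentication.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Authentication succeeded if the controller shows up; tear it down before the
  # next combination is tried.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

In the log, the "nvme0n1" lines interleaved with the RPC output are the namespace appearing and disappearing as each attach/detach pair succeeds.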
00:36:12.291 nvme0n1 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:12.291 17:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.291 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.549 nvme0n1 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.549 17:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:12.549 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.550 17:14:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.550 17:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.808 nvme0n1 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:12.808 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.809 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.068 nvme0n1 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.068 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.327 nvme0n1 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.327 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.587 nvme0n1 00:36:13.587 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.587 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.587 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.587 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.587 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.587 17:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.587 17:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.587 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.845 nvme0n1 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.846 17:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.846 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.413 nvme0n1 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.413 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.414 17:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.672 nvme0n1 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.673 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.271 nvme0n1 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.271 17:14:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.271 17:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.535 nvme0n1 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.535 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.102 nvme0n1 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.102 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.103 17:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.669 nvme0n1 00:36:16.669 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.669 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.669 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.669 17:14:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.669 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.669 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.927 17:14:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.927 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.493 nvme0n1 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.493 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.494 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.494 17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.494 
17:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.118 nvme0n1 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.118 17:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.684 nvme0n1 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:18.684 17:14:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.684 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.685 17:14:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.685 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.943 nvme0n1 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.943 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:18.944 17:14:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.944 nvme0n1 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.944 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.203 nvme0n1 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.203 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.462 nvme0n1 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.462 17:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.462 nvme0n1 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.721 nvme0n1 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.721 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.980 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.981 nvme0n1 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:19.981 
17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.981 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.239 nvme0n1 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.240 
17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.240 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.498 nvme0n1 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.498 17:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.498 nvme0n1 00:36:20.499 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.499 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.499 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.499 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.499 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.757 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.015 nvme0n1 00:36:21.015 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.016 
17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.016 17:14:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.016 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.274 nvme0n1 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:21.274 17:14:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.274 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.275 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.564 nvme0n1 00:36:21.564 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.564 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.564 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.564 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.564 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.565 17:14:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.565 17:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.565 nvme0n1 00:36:21.565 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.565 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.565 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.565 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.565 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.565 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.833 
17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
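The trace above repeats the same attach/verify/detach cycle for each DH group and key index. A minimal sketch of that loop, reconstructed only from the rpc_cmd calls visible in this trace (it assumes the kernel nvmet target, the subsystem nqn.2024-02.io.spdk:cnode0 listening on 10.0.0.1:4420, and the DH-HMAC-CHAP keys key0..key4 with controller keys ckey0..ckey3 have already been configured by host/auth.sh, and that rpc_cmd is the autotest wrapper around scripts/rpc.py):

    # sha512 section of the test; keyid 4 has no controller key in this trace
    digest=sha512
    ckeys=([0]=1 [1]=1 [2]=1 [3]=1)
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3 4; do
            # Restrict the initiator to one digest/dhgroup combination
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # Connect with the keyid-th key; pass a controller key only if one exists
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # Authentication succeeded if the controller is listed, then tear it down
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The remainder of the trace below is the ffdhe6144 pass of this same cycle.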
00:36:21.833 nvme0n1 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.833 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.100 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:22.101 17:14:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.101 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.359 nvme0n1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.359 17:14:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.359 17:14:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.359 17:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.926 nvme0n1 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.926 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.184 nvme0n1 00:36:23.184 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.184 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.185 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.443 17:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.701 nvme0n1 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:23.701 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.702 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.269 nvme0n1 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI3YjBlNWY0MWQ2ZWJkN2E3MGYxNDhiNzlmZmFjOWTSqgdZ: 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBjNzIyNDBmYzU5N2MxMGZjYzI1MjI0MDdmOGQ3OGQ1ZWJjZjIwZGI3Y2UyNDgzMDM0YTA1YTU4ZTlmMDE3ZuH+k+s=: 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.269 17:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.269 17:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.848 nvme0n1 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.848 17:14:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.848 17:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.415 nvme0n1 00:36:25.415 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.415 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.415 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.415 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.415 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.415 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzkwNWY2MGJmNDk4YjQzNTA3NWI3NWUzNzI0NTY0ODh7W3ot: 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWUxZWI1YTE1NmMwMDkwMGRmN2IyYTM0YTgxM2FkODOzlbTr: 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.671 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.233 nvme0n1 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQxNmUyODVkMjlkODc2YTA1ZmRjZmZiMjI2OWRlMjcwNTFlNGM2MzVkMGZiNGY4eBBLWg==: 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M2ZGY4MDQ4MzIyMDc4MTlhODViNDQ3YjNjMDdjZje4yvnH: 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.234 17:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.798 nvme0n1 00:36:26.798 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.798 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.798 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.798 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.798 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.798 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2YmIxOTQzMzllZmZiMmEzZDA4YmUxODQzYmMzMzgxYjVmYTY1ODY0ZjMxOTJmZjQyMzBhMmM2MTY3ZmMzZSrAujs=: 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.056 17:14:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.056 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.057 17:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.621 nvme0n1 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5MzYzMGEyNjI0N2Q4YzVkYWQxZTFjOTg5YWNjZmJiMTY3YzI4YzQzNGVkNzU0+pqCHg==: 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2QwNGEzMTdjNmEzOGRlZmQ5NjEwMmNmNjkzNjBkMzM1MGI2NWI3ZDQ1YWJiMWQy2x8vtg==: 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # 
local es=0 00:36:27.621 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.622 request: 00:36:27.622 { 00:36:27.622 "name": "nvme0", 00:36:27.622 "trtype": "tcp", 00:36:27.622 "traddr": "10.0.0.1", 00:36:27.622 "adrfam": "ipv4", 00:36:27.622 "trsvcid": "4420", 00:36:27.622 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:27.622 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:27.622 "prchk_reftag": false, 00:36:27.622 "prchk_guard": false, 00:36:27.622 "hdgst": false, 00:36:27.622 "ddgst": false, 00:36:27.622 "method": "bdev_nvme_attach_controller", 00:36:27.622 "req_id": 1 00:36:27.622 } 00:36:27.622 Got JSON-RPC error response 00:36:27.622 response: 00:36:27.622 { 00:36:27.622 "code": -5, 00:36:27.622 "message": "Input/output error" 00:36:27.622 } 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.622 17:14:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.622 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.880 request: 00:36:27.880 { 00:36:27.880 "name": "nvme0", 00:36:27.880 "trtype": "tcp", 00:36:27.880 "traddr": "10.0.0.1", 00:36:27.880 "adrfam": "ipv4", 00:36:27.880 "trsvcid": "4420", 00:36:27.880 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:27.880 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:27.880 "prchk_reftag": false, 00:36:27.880 "prchk_guard": false, 00:36:27.880 "hdgst": false, 00:36:27.880 "ddgst": false, 00:36:27.880 "dhchap_key": "key2", 00:36:27.880 "method": "bdev_nvme_attach_controller", 00:36:27.880 "req_id": 1 00:36:27.880 } 00:36:27.880 Got JSON-RPC error response 00:36:27.880 response: 00:36:27.880 { 00:36:27.880 "code": -5, 00:36:27.880 "message": "Input/output error" 00:36:27.880 } 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.880 17:14:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.880 request: 00:36:27.880 { 00:36:27.880 "name": "nvme0", 00:36:27.880 "trtype": "tcp", 00:36:27.880 "traddr": "10.0.0.1", 00:36:27.880 "adrfam": "ipv4", 00:36:27.880 "trsvcid": "4420", 00:36:27.880 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:27.880 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:36:27.880 "prchk_reftag": false, 00:36:27.880 "prchk_guard": false, 00:36:27.880 "hdgst": false, 00:36:27.880 "ddgst": false, 00:36:27.880 "dhchap_key": "key1", 00:36:27.880 "dhchap_ctrlr_key": "ckey2", 00:36:27.880 "method": "bdev_nvme_attach_controller", 00:36:27.880 "req_id": 1 00:36:27.880 } 00:36:27.880 Got JSON-RPC error response 00:36:27.880 response: 00:36:27.880 { 00:36:27.880 "code": -5, 00:36:27.880 "message": "Input/output error" 00:36:27.880 } 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:27.880 rmmod nvme_tcp 00:36:27.880 rmmod nvme_fabrics 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:27.880 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 86635 ']' 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 86635 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 86635 ']' 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 86635 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86635 00:36:27.881 killing process with pid 86635 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86635' 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 86635 00:36:27.881 17:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@972 -- # wait 86635 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:29.269 17:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:30.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:30.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:30.214 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:30.473 17:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2ac /tmp/spdk.key-null.QX1 /tmp/spdk.key-sha256.9lk /tmp/spdk.key-sha384.2ag /tmp/spdk.key-sha512.lXN /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:36:30.473 17:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:30.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:30.735 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:30.735 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:36:30.998 00:36:30.998 real 0m36.124s 00:36:30.998 user 0m31.996s 00:36:30.998 sys 0m4.568s 00:36:30.998 ************************************ 00:36:30.998 END TEST nvmf_auth_host 00:36:30.998 ************************************ 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.998 ************************************ 00:36:30.998 START TEST nvmf_digest 00:36:30.998 ************************************ 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:30.998 * Looking for test storage... 00:36:30.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.998 17:14:32 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.998 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.999 
17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:30.999 Cannot find device "nvmf_tgt_br" 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:36:30.999 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:31.255 Cannot find device "nvmf_tgt_br2" 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:31.255 Cannot find device "nvmf_tgt_br" 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:31.255 Cannot find device "nvmf_tgt_br2" 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:31.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:31.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip 
netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:31.255 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:31.256 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:31.256 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:31.256 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:31.256 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:31.256 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:31.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:31.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:36:31.514 00:36:31.514 --- 10.0.0.2 ping statistics --- 00:36:31.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.514 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:31.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:31.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:36:31.514 00:36:31.514 --- 10.0.0.3 ping statistics --- 00:36:31.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.514 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:31.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:31.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:36:31.514 00:36:31.514 --- 10.0.0.1 ping statistics --- 00:36:31.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.514 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:31.514 ************************************ 00:36:31.514 START TEST nvmf_digest_clean 00:36:31.514 ************************************ 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:31.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
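Note: the nvmf_veth_init sequence traced above builds the virtual fabric the digest tests run over: one veth pair for the initiator side (10.0.0.1 on the host), two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), everything tied together by the nvmf_br bridge, plus an iptables rule admitting TCP port 4420. Condensed from the trace (addresses and interface names exactly as used in this run), the essential commands are roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm the bridge forwards traffic before the target is started.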
00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=88206 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 88206 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 88206 ']' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:31.514 17:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:31.771 [2024-07-22 17:14:33.139509] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:31.771 [2024-07-22 17:14:33.139696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.771 [2024-07-22 17:14:33.329660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.029 [2024-07-22 17:14:33.598011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:32.029 [2024-07-22 17:14:33.598083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.029 [2024-07-22 17:14:33.598099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.029 [2024-07-22 17:14:33.598115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.029 [2024-07-22 17:14:33.598127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
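nvmfappstart then launches the target inside that namespace with --wait-for-rpc (the exact command is visible in the trace just above), and waitforlisten blocks until the RPC socket answers. A rough stand-alone equivalent, assuming the default /var/tmp/spdk.sock RPC socket and the repo layout used in this run, and assuming waitforlisten's polling can be approximated by repeated rpc_get_methods calls, would be:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # poll the RPC socket until the app is ready (approximately what waitforlisten does)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Because of --wait-for-rpc, the framework is only initialized once the test pushes its configuration over RPC: the uring socket override, the null0 bdev, the TCP transport and the 10.0.0.2:4420 listener that appear in the trace just below.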
00:36:32.029 [2024-07-22 17:14:33.598184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.595 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:32.859 [2024-07-22 17:14:34.426977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:36:33.118 null0 00:36:33.118 [2024-07-22 17:14:34.584150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.118 [2024-07-22 17:14:34.608364] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88238 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88238 /var/tmp/bperf.sock 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 88238 ']' 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:33.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:33.118 17:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.376 [2024-07-22 17:14:34.774036] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:33.376 [2024-07-22 17:14:34.774475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88238 ] 00:36:33.376 [2024-07-22 17:14:34.960471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.942 [2024-07-22 17:14:35.293378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.200 17:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.200 17:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:34.200 17:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:34.200 17:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:34.200 17:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:34.853 [2024-07-22 17:14:36.219915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:36:34.853 17:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:34.853 17:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:35.418 nvme0n1 00:36:35.418 17:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:35.418 17:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:35.418 Running I/O for 2 seconds... 
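On the initiator side, each run_bperf pass drives bdevperf over its own RPC socket (/var/tmp/bperf.sock). Condensed from the trace above, the sequence for this first pass (randread, 4 KiB I/O, queue depth 128) is:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag requests the NVMe/TCP data digest, so every data PDU in the 2-second run whose results follow carries a CRC32C that the host must compute and verify; those CRC32C operations are what the accel statistics checked after each pass are counting.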
00:36:37.318 00:36:37.318 Latency(us) 00:36:37.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.318 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:37.318 nvme0n1 : 2.01 12528.42 48.94 0.00 0.00 10209.75 9175.04 28835.84 00:36:37.318 =================================================================================================================== 00:36:37.318 Total : 12528.42 48.94 0.00 0.00 10209.75 9175.04 28835.84 00:36:37.318 0 00:36:37.318 17:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:37.318 17:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:37.318 17:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:37.318 17:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:37.318 17:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:37.318 | select(.opcode=="crc32c") 00:36:37.318 | "\(.module_name) \(.executed)"' 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88238 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 88238 ']' 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 88238 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88238 00:36:37.576 killing process with pid 88238 00:36:37.576 Received shutdown signal, test time was about 2.000000 seconds 00:36:37.576 00:36:37.576 Latency(us) 00:36:37.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.576 =================================================================================================================== 00:36:37.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88238' 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 88238 00:36:37.576 17:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
88238 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88315 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88315 /var/tmp/bperf.sock 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 88315 ']' 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:39.474 17:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:39.474 [2024-07-22 17:14:40.701428] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:39.474 [2024-07-22 17:14:40.701851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88315 ] 00:36:39.474 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:39.474 Zero copy mechanism will not be used. 
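Two details are worth pulling out of the first pass above. The reported figures are self-consistent: 12528.42 IOPS x 4096 B ≈ 48.94 MiB/s, and with 128 outstanding I/Os Little's law gives 128 / 12528.42 ≈ 10.2 ms, matching the 10209.75 µs average latency. And the pass is only declared clean after the accel statistics confirm crc32c work was actually executed by the expected module, queried as in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

Here this prints the software module with a non-zero executed count, satisfying the [[ software == software ]] check before bdevperf (pid 88238) is killed and the next pass begins.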
00:36:39.474 [2024-07-22 17:14:40.888439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.732 [2024-07-22 17:14:41.182628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.991 17:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:39.991 17:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:39.991 17:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:39.991 17:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:39.991 17:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:40.577 [2024-07-22 17:14:42.027121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:36:40.577 17:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:40.577 17:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.142 nvme0n1 00:36:41.142 17:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:41.142 17:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:41.142 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:41.142 Zero copy mechanism will not be used. 00:36:41.142 Running I/O for 2 seconds... 
00:36:43.040 00:36:43.040 Latency(us) 00:36:43.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.040 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:43.040 nvme0n1 : 2.00 6743.44 842.93 0.00 0.00 2369.00 2137.72 7957.94 00:36:43.040 =================================================================================================================== 00:36:43.040 Total : 6743.44 842.93 0.00 0.00 2369.00 2137.72 7957.94 00:36:43.040 0 00:36:43.040 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:43.040 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:43.040 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:43.040 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:43.040 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:43.040 | select(.opcode=="crc32c") 00:36:43.040 | "\(.module_name) \(.executed)"' 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88315 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 88315 ']' 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 88315 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:43.298 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88315 00:36:43.556 killing process with pid 88315 00:36:43.556 Received shutdown signal, test time was about 2.000000 seconds 00:36:43.556 00:36:43.556 Latency(us) 00:36:43.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.556 =================================================================================================================== 00:36:43.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:43.556 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:43.556 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:43.556 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88315' 00:36:43.556 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 88315 00:36:43.556 17:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
88315 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88389 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88389 /var/tmp/bperf.sock 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 88389 ']' 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:44.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:44.931 17:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:45.190 [2024-07-22 17:14:46.558232] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:36:45.190 [2024-07-22 17:14:46.558622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88389 ] 00:36:45.190 [2024-07-22 17:14:46.725583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.449 [2024-07-22 17:14:46.996211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.016 17:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:46.016 17:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:46.016 17:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:46.016 17:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:46.016 17:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:46.583 [2024-07-22 17:14:48.067831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:36:46.842 17:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:46.842 17:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.100 nvme0n1 00:36:47.100 17:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:47.100 17:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.100 Running I/O for 2 seconds... 
00:36:49.632 00:36:49.632 Latency(us) 00:36:49.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.632 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:49.632 nvme0n1 : 2.01 14537.69 56.79 0.00 0.00 8796.05 7895.53 18100.42 00:36:49.632 =================================================================================================================== 00:36:49.632 Total : 14537.69 56.79 0.00 0.00 8796.05 7895.53 18100.42 00:36:49.632 0 00:36:49.632 17:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:49.632 17:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:49.632 17:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:49.632 17:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:49.632 | select(.opcode=="crc32c") 00:36:49.632 | "\(.module_name) \(.executed)"' 00:36:49.632 17:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88389 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 88389 ']' 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 88389 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88389 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88389' 00:36:49.632 killing process with pid 88389 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 88389 00:36:49.632 Received shutdown signal, test time was about 2.000000 seconds 00:36:49.632 00:36:49.632 Latency(us) 00:36:49.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.632 =================================================================================================================== 00:36:49.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:49.632 17:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
88389 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:51.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88467 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88467 /var/tmp/bperf.sock 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 88467 ']' 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:51.009 17:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:51.009 [2024-07-22 17:14:52.548924] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:51.009 [2024-07-22 17:14:52.549624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88467 ] 00:36:51.009 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:51.009 Zero copy mechanism will not be used. 
00:36:51.268 [2024-07-22 17:14:52.760867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.527 [2024-07-22 17:14:53.089845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.153 17:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:52.153 17:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:52.153 17:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:52.153 17:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:52.153 17:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:52.411 [2024-07-22 17:14:53.957913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:36:52.669 17:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.670 17:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.928 nvme0n1 00:36:52.928 17:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:52.928 17:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:53.187 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:53.187 Zero copy mechanism will not be used. 00:36:53.187 Running I/O for 2 seconds... 
00:36:55.088 00:36:55.088 Latency(us) 00:36:55.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.088 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:55.088 nvme0n1 : 2.00 6166.49 770.81 0.00 0.00 2589.22 1856.85 6678.43 00:36:55.088 =================================================================================================================== 00:36:55.088 Total : 6166.49 770.81 0.00 0.00 2589.22 1856.85 6678.43 00:36:55.088 0 00:36:55.088 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:55.088 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:55.088 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:55.088 | select(.opcode=="crc32c") 00:36:55.088 | "\(.module_name) \(.executed)"' 00:36:55.088 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:55.088 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88467 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 88467 ']' 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 88467 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88467 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88467' 00:36:55.347 killing process with pid 88467 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 88467 00:36:55.347 Received shutdown signal, test time was about 2.000000 seconds 00:36:55.347 00:36:55.347 Latency(us) 00:36:55.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.347 =================================================================================================================== 00:36:55.347 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:55.347 17:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
88467 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 88206 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 88206 ']' 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 88206 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88206 00:36:57.306 killing process with pid 88206 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88206' 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 88206 00:36:57.306 17:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 88206 00:36:58.682 ************************************ 00:36:58.682 END TEST nvmf_digest_clean 00:36:58.682 ************************************ 00:36:58.682 00:36:58.682 real 0m26.971s 00:36:58.682 user 0m50.331s 00:36:58.682 sys 0m5.685s 00:36:58.682 17:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:58.682 17:14:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:58.682 ************************************ 00:36:58.682 START TEST nvmf_digest_error 00:36:58.682 ************************************ 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:58.682 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=88581 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 
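The error-path test starts its own target the same way: the app is launched with --wait-for-rpc so nothing is initialized until the test drives it over the RPC socket. A rough equivalent of the launch traced just above (network namespace, binary path and flags as captured in this log; backgrounding and pid capture are how the suite's helper behaves, stated here as an assumption):

  SPDK=/home/vagrant/spdk_repo/spdk
  # -e 0xFFFF sets the tracepoint group mask, -i 0 picks shm instance 0 (hence nvmf_trace.0),
  # --wait-for-rpc holds subsystem init until framework_start_init is sent over the RPC socket
  ip netns exec nvmf_tgt_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # the suite then blocks until the app answers on /var/tmp/spdk.sock (the waitforlisten step traced right below)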
00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 88581 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 88581 ']' 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:58.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:58.683 17:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:58.683 [2024-07-22 17:15:00.135765] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:58.683 [2024-07-22 17:15:00.135929] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:58.941 [2024-07-22 17:15:00.307721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.200 [2024-07-22 17:15:00.620837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:59.200 [2024-07-22 17:15:00.620902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:59.200 [2024-07-22 17:15:00.620916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:59.200 [2024-07-22 17:15:00.620947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:59.200 [2024-07-22 17:15:00.620958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
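The app_setup_trace notices above also spell out how to pull the trace data this target is recording; a quick sketch using exactly the commands the notices suggest (only the copy destination below is arbitrary):

  # capture a runtime snapshot of events from instance 0 of the nvmf target
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory ring for offline analysis, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0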
00:36:59.200 [2024-07-22 17:15:00.621023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.459 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.716 [2024-07-22 17:15:01.081966] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:59.716 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.716 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:59.716 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:59.716 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.716 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.975 [2024-07-22 17:15:01.373942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:36:59.975 null0 00:36:59.975 [2024-07-22 17:15:01.534849] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.975 [2024-07-22 17:15:01.559012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88619 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88619 /var/tmp/bperf.sock 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:59.975 17:15:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 88619 ']' 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:59.975 17:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:00.235 [2024-07-22 17:15:01.693477] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:00.235 [2024-07-22 17:15:01.693937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88619 ] 00:37:00.494 [2024-07-22 17:15:01.878703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.752 [2024-07-22 17:15:02.136516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.010 [2024-07-22 17:15:02.413444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:37:01.010 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:01.010 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:01.010 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:01.010 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:01.269 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:01.269 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.269 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:01.269 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.269 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:01.269 17:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:01.873 nvme0n1 00:37:01.873 17:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:01.873 17:15:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.873 17:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:01.873 17:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.873 17:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:01.873 17:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:01.873 Running I/O for 2 seconds... 00:37:01.873 [2024-07-22 17:15:03.395451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:01.873 [2024-07-22 17:15:03.395836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.873 [2024-07-22 17:15:03.395969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:01.873 [2024-07-22 17:15:03.415675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:01.873 [2024-07-22 17:15:03.415763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.873 [2024-07-22 17:15:03.415788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:01.873 [2024-07-22 17:15:03.434783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:01.873 [2024-07-22 17:15:03.434896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.873 [2024-07-22 17:15:03.434918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:01.873 [2024-07-22 17:15:03.454742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:01.873 [2024-07-22 17:15:03.454845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.873 [2024-07-22 17:15:03.454871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:01.873 [2024-07-22 17:15:03.474948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:01.873 [2024-07-22 17:15:03.475048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.873 [2024-07-22 17:15:03.475070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.494203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.494296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:2520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.494336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.514150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.514241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.514278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.533307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.533380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.533420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.551872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.551967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.551991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.570367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.570442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.570479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.588554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.588615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.588638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.607575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.607643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.607663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.626297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 
17:15:03.626364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.626387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.645671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.645746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.645766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.664662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.664741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.664778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.683126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.683198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.683222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.702454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.702551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.702573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.722754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.722833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.722858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.136 [2024-07-22 17:15:03.742058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.136 [2024-07-22 17:15:03.742145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.136 [2024-07-22 17:15:03.742166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.761665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.761743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.761775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.780653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.780730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.780751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.799769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.799838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.799875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.818450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.818518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.818558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.836818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.836895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.836915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.855011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.855075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.855097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.873390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.873457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 
[2024-07-22 17:15:03.891497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.891578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.891598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.910057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.910128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.910151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.928771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.928859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.928880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.946936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.946995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.947021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.965690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.965753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.965779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:03.984615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:03.984688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:03.984709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.395 [2024-07-22 17:15:04.003886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.395 [2024-07-22 17:15:04.003982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.395 [2024-07-22 17:15:04.004021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.654 [2024-07-22 17:15:04.022944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.654 [2024-07-22 17:15:04.023021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.654 [2024-07-22 17:15:04.023041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.654 [2024-07-22 17:15:04.040923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.654 [2024-07-22 17:15:04.040984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.654 [2024-07-22 17:15:04.041006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.654 [2024-07-22 17:15:04.059398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.654 [2024-07-22 17:15:04.059459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.654 [2024-07-22 17:15:04.059484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.654 [2024-07-22 17:15:04.077316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.654 [2024-07-22 17:15:04.077389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.654 [2024-07-22 17:15:04.077420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.654 [2024-07-22 17:15:04.097530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.097613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.097637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.116837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.116915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.116936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.134685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.134749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 
[2024-07-22 17:15:04.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.152565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.152623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.152648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.170503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.170566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.170584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.188181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.188242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.188281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.206124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.206188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.206209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.224796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.224863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.224882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.242751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.242804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.242826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.655 [2024-07-22 17:15:04.260954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.655 [2024-07-22 17:15:04.261041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:93 nsid:1 lba:24168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.655 [2024-07-22 17:15:04.261061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.280534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.280620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.280644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.299522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.299599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.299638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.318908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.318997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.319018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.338229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.338319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.338343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.358514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.358596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.358617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.377861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.377927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.377950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.396520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 
17:15:04.396591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.396611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.415009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.415073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.415096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.433642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.433713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.433749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.452735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.452991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.453111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.471788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.472025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.472058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.491257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.491370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.491392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:02.914 [2024-07-22 17:15:04.510698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:02.914 [2024-07-22 17:15:04.510780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.914 [2024-07-22 17:15:04.510808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.531650] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.531780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.531817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.550667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.550738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.550760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.569487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.569583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.569613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.597226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.597330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.597351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.616037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.616131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.616153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.634486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.634562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.634585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.652662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.652739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.652760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.671570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.671642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.671665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.690529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.690612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.690632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.708848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.708946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.708965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.726819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.726892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.726913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.744058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.744137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.744156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.761231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.761306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 17:15:04.761327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.174 [2024-07-22 17:15:04.779728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.174 [2024-07-22 17:15:04.779793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.174 [2024-07-22 
17:15:04.779815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.798475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.798542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.798561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.817386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.817456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.817479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.837940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.838038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.838060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.856801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.856883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.856907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.875499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.875570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.875590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.893993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.894072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.894110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.912482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.912550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:22370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.912574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.930833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.930915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.930935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.949905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.949985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.950013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.969022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.969109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.969134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:04.987601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:04.987682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:04.987701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:05.005733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:05.005801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:05.005843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:05.024527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 17:15:05.024605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:05.024626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.434 [2024-07-22 17:15:05.043272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.434 [2024-07-22 
17:15:05.043340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.434 [2024-07-22 17:15:05.043364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.693 [2024-07-22 17:15:05.062350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.693 [2024-07-22 17:15:05.062421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.693 [2024-07-22 17:15:05.062460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.081213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.081309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.081346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.099372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.099434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.099453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.117545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.117616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.117635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.138083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.138172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.138195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.156801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.156880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.156899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.174866] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.174937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.174956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.193502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.193579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.193598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.212690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.212771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.212792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.231196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.231289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.249540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.249619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.249638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.267408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.267487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.267504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.285676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.285741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.285760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.694 [2024-07-22 17:15:05.303285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.694 [2024-07-22 17:15:05.303357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.694 [2024-07-22 17:15:05.303376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.988 [2024-07-22 17:15:05.321287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.988 [2024-07-22 17:15:05.321347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.988 [2024-07-22 17:15:05.321364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.988 [2024-07-22 17:15:05.338123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.988 [2024-07-22 17:15:05.338185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.988 [2024-07-22 17:15:05.338203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.988 [2024-07-22 17:15:05.355236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.988 [2024-07-22 17:15:05.355313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.988 [2024-07-22 17:15:05.355330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.988 [2024-07-22 17:15:05.373105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:03.988 [2024-07-22 17:15:05.373176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.988 [2024-07-22 17:15:05.373196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:03.988 00:37:03.988 Latency(us) 00:37:03.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.988 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:03.988 nvme0n1 : 2.01 13467.52 52.61 0.00 0.00 9495.98 8238.81 36700.16 00:37:03.988 =================================================================================================================== 00:37:03.988 Total : 13467.52 52.61 0.00 0.00 9495.98 8238.81 36700.16 00:37:03.988 0 00:37:03.988 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:03.988 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:03.988 | .driver_specific 00:37:03.988 | .nvme_error 00:37:03.988 | .status_code 00:37:03.988 | .command_transient_transport_error' 00:37:03.988 17:15:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:03.988 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 )) 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88619 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 88619 ']' 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 88619 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88619 00:37:04.245 killing process with pid 88619 00:37:04.245 Received shutdown signal, test time was about 2.000000 seconds 00:37:04.245 00:37:04.245 Latency(us) 00:37:04.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.245 =================================================================================================================== 00:37:04.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88619' 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 88619 00:37:04.245 17:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 88619 00:37:05.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
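The assertion that closed out the previous run, (( 106 > 0 )), comes from the get_transient_errcount helper traced above: it queries the bdev's NVMe error counters over the bperf RPC socket and filters out the transient transport error count with jq. A minimal stand-alone sketch of that extraction, assuming the same socket path (/var/tmp/bperf.sock), bdev name (nvme0n1) and rpc.py location shown in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdev_get_iostat reports per-status-code NVMe error counters here because the
# controller was configured with --nvme-error-stat; pull out the transient
# transport error count for the first (only) bdev.
count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The case passes when at least one injected digest error surfaced as a
# COMMAND TRANSIENT TRANSPORT ERROR completion.
(( count > 0 )) && echo "transient transport errors observed: $count"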
00:37:05.615 17:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:05.615 17:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:05.615 17:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:05.615 17:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:05.615 17:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88686 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88686 /var/tmp/bperf.sock 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 88686 ']' 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:05.615 17:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:05.615 [2024-07-22 17:15:07.124114] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:05.615 [2024-07-22 17:15:07.124577] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88686 ] 00:37:05.615 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:05.615 Zero copy mechanism will not be used. 
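The trace above begins the next case, run_bperf_err randread 131072 16: a fresh bdevperf instance is started as the NVMe/TCP host with a 128 KiB random-read workload at queue depth 16, left idle (-z) until it is told to run, and the script then waits for its RPC socket to come up. A condensed sketch of that launch, assuming the repo layout and socket path from the trace; the polling loop is only a stand-in for the waitforlisten helper:

# Start bdevperf on core mask 0x2, idle until perform_tests is issued,
# with its own RPC socket at /var/tmp/bperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Wait until the UNIX domain RPC socket answers (rough equivalent of the
# traced waitforlisten helper).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      rpc_get_methods &>/dev/null; do
    sleep 0.1
done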
00:37:05.873 [2024-07-22 17:15:07.309310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.131 [2024-07-22 17:15:07.571623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.389 [2024-07-22 17:15:07.850345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:37:06.646 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:06.646 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:06.646 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:06.646 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:06.905 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:06.905 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.905 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:06.905 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.905 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:06.905 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:07.163 nvme0n1 00:37:07.163 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:07.163 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.163 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:07.163 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.163 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:07.163 17:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:07.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:07.421 Zero copy mechanism will not be used. 00:37:07.421 Running I/O for 2 seconds... 
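Before the 2-second run starts, the trace above configures the new bdevperf instance and arms the crc32c error injection: NVMe error counting is enabled with retries disabled, any stale injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), every 32nd crc32c operation is then corrupted, and perform_tests kicks off the workload. A condensed sketch of that sequence; it assumes the accel_error_inject_error calls go to the nvmf target application's default RPC socket (behind the traced rpc_cmd helper), while the remaining calls go to the bperf socket exactly as expanded in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Count NVMe errors per status code and never retry failed commands, so every
# injected digest error remains visible later in bdev_get_iostat.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any leftover crc32c error injection from the previous case.
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe/TCP controller with data digest enabled (--ddgst).
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation, then run the queued 2-second workload.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests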
00:37:07.421 [2024-07-22 17:15:08.870463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.421 [2024-07-22 17:15:08.870541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.421 [2024-07-22 17:15:08.870565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.421 [2024-07-22 17:15:08.875896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.421 [2024-07-22 17:15:08.875963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.421 [2024-07-22 17:15:08.875988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.421 [2024-07-22 17:15:08.881357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.421 [2024-07-22 17:15:08.881417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.421 [2024-07-22 17:15:08.881453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.421 [2024-07-22 17:15:08.886844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.421 [2024-07-22 17:15:08.886909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.421 [2024-07-22 17:15:08.886928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.421 [2024-07-22 17:15:08.892377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.892445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.892469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.898078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.898147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.898175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.903494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.903554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.903572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.908790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.908859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.908884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.914257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.914321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.914344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.919645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.919699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.919722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.924899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.924963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.924987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.930246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.930311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.930328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.935567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.935623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.935641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.940862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.940914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:07.422 [2024-07-22 17:15:08.940936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.946073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.946123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.946150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.951516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.951574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.951593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.956967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.957028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.957047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.962458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.962523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.962570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.967860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.967967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.973310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.973396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.973417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.978945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.979018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.979054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.984181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.984236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.984278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.989532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.989586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.989609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:08.994862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:08.994925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:08.994943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.000079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.000138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:09.000157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.005429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:09.005514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.010733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.010798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:09.010826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.016196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.016265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:09.016289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.021547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.021607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:09.021624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.026839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.026896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.422 [2024-07-22 17:15:09.026914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.422 [2024-07-22 17:15:09.031961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.422 [2024-07-22 17:15:09.032022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.423 [2024-07-22 17:15:09.032051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.423 [2024-07-22 17:15:09.037492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.423 [2024-07-22 17:15:09.037543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.423 [2024-07-22 17:15:09.037565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.681 [2024-07-22 17:15:09.042988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.681 [2024-07-22 17:15:09.043075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.681 [2024-07-22 17:15:09.043093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.681 [2024-07-22 17:15:09.048548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.681 [2024-07-22 17:15:09.048607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.681 [2024-07-22 17:15:09.048627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.681 [2024-07-22 
17:15:09.053909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.681 [2024-07-22 17:15:09.053960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.681 [2024-07-22 17:15:09.053981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.681 [2024-07-22 17:15:09.059116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.059170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.059191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.064549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.064606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.064625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.070003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.070063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.070080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.075358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.075402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.075422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.080538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.080587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.080609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.086018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.086071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.086094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.091870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.091929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.091948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.097122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.097179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.097198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.102274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.102340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.107534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.107590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.107619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.112864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.112927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.112946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.118309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.118377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.118395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.123638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.123693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 
17:15:09.123714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.129326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.129404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.129446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.134876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.134944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.134979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.140494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.140553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.140572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.145786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.145839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.145862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.151104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.151153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.151174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.156142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.156201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.156219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.161135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.161189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.161207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.166930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.167004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.167030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.172543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.172598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.172621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.177661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.177708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.177727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.182616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.182668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.187951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.682 [2024-07-22 17:15:09.188009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.682 [2024-07-22 17:15:09.188029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.682 [2024-07-22 17:15:09.193521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.193569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.193590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.198615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 
17:15:09.198662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.198683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.203970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.204028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.204047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.209405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.209483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.209513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.214862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.214916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.214944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.220392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.220446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.220469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.225686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.225763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.230999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.231059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.231078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.236386] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.236438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.236460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.241882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.241933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.241955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.247465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.247517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.247539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.253016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.253082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.253101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.259129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.259270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.265745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.265805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.265835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.271282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.271340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.271375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.276694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.276756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.276775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.282192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.282266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.282286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.287567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.287629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.287653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.683 [2024-07-22 17:15:09.293057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.683 [2024-07-22 17:15:09.293119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.683 [2024-07-22 17:15:09.293142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.299119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.299178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.299200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.304604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.304664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.304682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.309944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.310000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.310017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.315349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.315412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.315446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.320769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.320833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.320863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.326484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.326538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.326556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.331605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.331677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.331712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.336743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.336799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.336822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.342131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.342181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.342203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.347965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.348018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.353025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.353081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.353099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.358459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.358524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.358544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.363538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.363588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.363614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.368954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.369008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.369030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.374554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.374613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.374647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.379722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.379774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.379807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.384741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.942 [2024-07-22 17:15:09.384793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.942 [2024-07-22 17:15:09.384815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.942 [2024-07-22 17:15:09.390289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.390335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.390356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.395498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.395555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.395572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.400674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.400733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.400752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.406220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.406283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.406305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.411646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.411700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.411727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.416888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.416960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.416979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.422122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.422184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.422202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.427335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.427396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.427425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.432975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.433027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.433052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.438316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.438365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.438387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.443519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.443579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.443596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.449215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.449297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.449317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.454715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.454770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.454792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 
[2024-07-22 17:15:09.459831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.459923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.459951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.465089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.465161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.465214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.470744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.470804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.470822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.476010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.476059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.476081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.481713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.481784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.481815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.487344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.487411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.487431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.493651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.493715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.493734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.499438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.499492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.499527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.504896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.504948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.505002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.511030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.511097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.511145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.516542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.516601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.516621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.521888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.521964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.521988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.527714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.943 [2024-07-22 17:15:09.527775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.943 [2024-07-22 17:15:09.527797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.943 [2024-07-22 17:15:09.533176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.944 [2024-07-22 17:15:09.533232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.944 
[2024-07-22 17:15:09.533271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:07.944 [2024-07-22 17:15:09.538377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.944 [2024-07-22 17:15:09.538423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.944 [2024-07-22 17:15:09.538438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.944 [2024-07-22 17:15:09.544091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.944 [2024-07-22 17:15:09.544145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.944 [2024-07-22 17:15:09.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:07.944 [2024-07-22 17:15:09.549242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.944 [2024-07-22 17:15:09.549303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.944 [2024-07-22 17:15:09.549319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:07.944 [2024-07-22 17:15:09.554141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:07.944 [2024-07-22 17:15:09.554190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.944 [2024-07-22 17:15:09.554207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.203 [2024-07-22 17:15:09.559530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.203 [2024-07-22 17:15:09.559583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.203 [2024-07-22 17:15:09.559603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.203 [2024-07-22 17:15:09.565309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.203 [2024-07-22 17:15:09.565358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.203 [2024-07-22 17:15:09.565376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.203 [2024-07-22 17:15:09.570356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.203 [2024-07-22 17:15:09.570399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.203 [2024-07-22 17:15:09.570415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.203 [2024-07-22 17:15:09.575431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.203 [2024-07-22 17:15:09.575475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.203 [2024-07-22 17:15:09.575491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.203 [2024-07-22 17:15:09.580691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.203 [2024-07-22 17:15:09.580755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.203 [2024-07-22 17:15:09.580781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.203 [2024-07-22 17:15:09.586436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.203 [2024-07-22 17:15:09.586493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.586510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.591762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.591823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.591843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.596960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.597024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.597050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.603929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.604010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.604029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.610007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 
[2024-07-22 17:15:09.610059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.610076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.615698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.615753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.615772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.621213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.621283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.621302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.626673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.626725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.626743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.631993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.632050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.632076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.637524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.637576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.637595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.642778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.642827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.642845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.647929] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.647979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.648014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.653366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.653425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.653460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.658890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.658955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.658974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.664384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.664434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.664453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.669697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.669760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.669785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.674903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.674949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.674965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.680264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.680314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.680333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.685649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.685714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.685732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.691003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.691057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.691074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.696467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.696526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.696544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.701746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.701799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.701817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.707313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.707375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.707400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.712887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.712944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.712963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.718405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.718456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 
17:15:09.718473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.724399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.724455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.724474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.204 [2024-07-22 17:15:09.730169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.204 [2024-07-22 17:15:09.730237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.204 [2024-07-22 17:15:09.730284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.735672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.735724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.735742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.741264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.741330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.741349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.746455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.746500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.746516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.751357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.751402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.751418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.756602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.756653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.756671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.762138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.762189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.762206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.767412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.767463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.767480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.772646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.772702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.772721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.777957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.778009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.778026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.783510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.783571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.783597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.788866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.788919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.788937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.794170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 
[2024-07-22 17:15:09.794219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.794253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.799437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.799482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.799499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.804946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.805008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.805033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.810476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.810525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.810542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.205 [2024-07-22 17:15:09.815965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.205 [2024-07-22 17:15:09.816023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.205 [2024-07-22 17:15:09.816042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.821555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.821606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.821625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.827048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.827105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.827154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.832446] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.832496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.832514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.837829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.837880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.837899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.843111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.843161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.843178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.848413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.848465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.848483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.853754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.853804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.853821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.859330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.859379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.859415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.864790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.864842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.864861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.870339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.870396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.870414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.875953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.876012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.876031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.881663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.881721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.881740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.886935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.886985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.887002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.892483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.892540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.892560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.897634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.897685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.897701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.902782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.902831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 
17:15:09.902847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.907919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.907978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.908002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.913093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.913144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.913161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.918332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.918376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.918392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.923527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.923573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.923595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.929081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.929134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.929152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.934539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.934585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.934601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.939577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.939624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.939642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.944816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.944881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.465 [2024-07-22 17:15:09.944899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.465 [2024-07-22 17:15:09.950578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.465 [2024-07-22 17:15:09.950640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.950665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.955964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.956015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.956034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.961234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.961305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.961324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.966634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.966686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.966703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.971769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.971824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.971840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.977274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 
17:15:09.977322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.977339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.982695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.982755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.982796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.988285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.988343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.988362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.993905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.993960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.993977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:09.999373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:09.999425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:09.999442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.005048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.005124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.005150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.010818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.010883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.010903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.016563] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.016626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.016646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.022123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.022184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.022203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.028187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.028265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.028286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.033782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.033839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.033874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.039409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.039464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.039483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.045002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.045058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.045077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.050386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.050437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.050455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.055778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.055836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.055870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.061479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.061537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.061556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.067047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.067107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.067126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.072496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.072551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.072571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.466 [2024-07-22 17:15:10.077909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.466 [2024-07-22 17:15:10.077969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.466 [2024-07-22 17:15:10.077987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.083834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.083917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.083943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.089504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.089568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.089587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.094887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.094946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.094964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.100366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.100422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.100441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.106230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.106306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.106326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.112023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.112080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.112099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.117606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.117668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.117687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.123080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.123140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.123160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.128692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.128759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.726 [2024-07-22 17:15:10.128780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.726 [2024-07-22 17:15:10.134101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.726 [2024-07-22 17:15:10.134161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.134181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.139599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.139661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.139678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.145072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.145131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.145149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.150367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.150420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.150439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.155687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.155751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.155792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.161123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.161182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.161201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.166668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.166737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.166755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.172034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.172090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.172109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.177427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.177477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.177494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.182937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.182996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.183016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.188464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.188518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.188537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.193869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.193922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.193957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.199236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.199301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.199320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.204404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.204455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.204492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.209671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.209722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.209739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.215055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.215121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.215150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.220665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.220723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.220758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.226032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.226100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.226118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.231546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.231610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.231629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.237206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.237300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.237329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.727 
[2024-07-22 17:15:10.242738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.242830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.242855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.248516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.248584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.248603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.254004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.254072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.254092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.259799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.259897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.727 [2024-07-22 17:15:10.259918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.727 [2024-07-22 17:15:10.265107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.727 [2024-07-22 17:15:10.265160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.265195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.270494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.270544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.270561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.275679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.275733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.275753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.281176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.281227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.281259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.286577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.286632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.286650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.292188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.292242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.292274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.297629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.297685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.297703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.302981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.303035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.303054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.308235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.308298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.308316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.313668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.313721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 
[2024-07-22 17:15:10.313739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.319205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.319271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.319297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.324697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.324753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.324771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.330006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.330056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.330072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.335072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.335138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.335155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.728 [2024-07-22 17:15:10.340206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.728 [2024-07-22 17:15:10.340267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.728 [2024-07-22 17:15:10.340286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.988 [2024-07-22 17:15:10.345736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.988 [2024-07-22 17:15:10.345800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.988 [2024-07-22 17:15:10.345824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.988 [2024-07-22 17:15:10.351125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.988 [2024-07-22 17:15:10.351178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.988 [2024-07-22 17:15:10.351195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.988 [2024-07-22 17:15:10.356302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.988 [2024-07-22 17:15:10.356352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.988 [2024-07-22 17:15:10.356370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.988 [2024-07-22 17:15:10.361651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.988 [2024-07-22 17:15:10.361697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.988 [2024-07-22 17:15:10.361730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.988 [2024-07-22 17:15:10.366620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.366666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.366683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.372017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.372072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.372091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.377460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.377521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.377542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.382939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.383000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.383017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.388831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:37:08.989 [2024-07-22 17:15:10.388890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.388909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.394509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.394558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.394576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.399732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.399780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.399797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.405023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.405072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.405089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.410469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.410517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.410535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.415631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.415697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.420761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.420811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.420846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.425981] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.426030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.426046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.431315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.431369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.431386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.436579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.436666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.436684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.441945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.442027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.442046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.447445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.447520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.447546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.453134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.453206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.453225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.458605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.458704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.458724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.464051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.464110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.464130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.469335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.469400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.469418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.474835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.474893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.474912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.480214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.480282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.480318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.485471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.485522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.485540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.490745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.490800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.490818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.496146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.496207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 
17:15:10.496226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.989 [2024-07-22 17:15:10.501601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.989 [2024-07-22 17:15:10.501662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.989 [2024-07-22 17:15:10.501680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.506994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.507045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.507079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.512455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.512508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.512526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.517832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.517890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.517909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.523225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.523291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.523310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.528631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.528688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.528707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.533951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.534006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.534039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.539159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.539215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.539233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.544560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.544618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.544637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.550152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.550227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.550265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.555667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.555729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.555747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.561084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.561148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.561167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.566537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.566598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.566617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.572055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 
17:15:10.572115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.572134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.577702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.577775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.577799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.582976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.583048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.583067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.588313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.588363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.588381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.593687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.593757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.598819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.598869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.598886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.990 [2024-07-22 17:15:10.604184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:08.990 [2024-07-22 17:15:10.604239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.990 [2024-07-22 17:15:10.604272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.609986] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.610041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.610076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.615496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.615549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.615568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.621139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.621203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.621227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.626739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.626800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.626819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.632224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.632295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.632315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.637653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.637716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.637736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.643332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.643386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.643405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.649466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.649533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.649559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.655214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.655307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.655339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.660531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.660588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.660608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.666158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.666226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.671912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.671968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.671987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.677566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.677626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.677645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.683205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.683283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.683304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.688971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.689048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.689067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.694582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.694637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.694671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.700303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.700371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.700396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.706025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.706081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.706099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.711972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.712039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.712058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.717490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.717580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.717600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.250 [2024-07-22 17:15:10.723594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.250 [2024-07-22 17:15:10.723667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.250 [2024-07-22 17:15:10.723689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.729509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.729576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.729595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.735420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.735518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.735546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.741618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.741702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.741724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.748155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.748237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.748273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.753805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.753899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.753928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.759543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.759611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.759630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.765399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 
17:15:10.765455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.765473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.771057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.771119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.771137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.776455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.776511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.776531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.782081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.782147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.782176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.787622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.787677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.787696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.793662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.793724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.793744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.799375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.799428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.799464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.805164] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.805259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.810886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.810943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.810960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.816498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.816566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.816586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.821871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.821931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.821949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.827710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.827774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.827794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.833436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.833496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.833516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.251 [2024-07-22 17:15:10.839051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:37:09.251 [2024-07-22 17:15:10.839123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.251 [2024-07-22 17:15:10.839143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:09.251 [2024-07-22 17:15:10.844798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:37:09.251 [2024-07-22 17:15:10.844856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:09.251 [2024-07-22 17:15:10.844874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:09.251 [2024-07-22 17:15:10.850119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:37:09.251 [2024-07-22 17:15:10.850181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:09.251 [2024-07-22 17:15:10.850207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:09.251 [2024-07-22 17:15:10.855610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:37:09.251 [2024-07-22 17:15:10.855662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:09.251 [2024-07-22 17:15:10.855697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:09.251 [2024-07-22 17:15:10.861300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:37:09.251 [2024-07-22 17:15:10.861358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:09.251 [2024-07-22 17:15:10.861383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:09.509 [2024-07-22 17:15:10.866698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:37:09.510 [2024-07-22 17:15:10.866748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:09.510 [2024-07-22 17:15:10.866766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:09.510
00:37:09.510                                                                      Latency(us)
00:37:09.510 Device Information               : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:09.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:09.510 nvme0n1                          :       2.00    5665.36     708.17       0.00     0.00    2819.93    2200.14    6865.68
00:37:09.510 ===================================================================================================================
00:37:09.510 Total                            :                5665.36     708.17       0.00     0.00    2819.93    2200.14    6865.68
00:37:09.510 0
00:37:09.510 17:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:09.510 17:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:09.510 17:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:09.510 17:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:09.510 | .driver_specific 00:37:09.510 | .nvme_error 00:37:09.510 | .status_code 00:37:09.510 | .command_transient_transport_error' 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 )) 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88686 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 88686 ']' 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 88686 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88686 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:09.768 killing process with pid 88686 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88686' 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 88686 00:37:09.768 Received shutdown signal, test time was about 2.000000 seconds 00:37:09.768 00:37:09.768 Latency(us) 00:37:09.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.768 =================================================================================================================== 00:37:09.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:09.768 17:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 88686 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88763 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88763 /var/tmp/bperf.sock 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 88763 ']' 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:11.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.142 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.143 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.143 17:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:11.414 [2024-07-22 17:15:12.851923] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:11.414 [2024-07-22 17:15:12.852069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88763 ] 00:37:11.671 [2024-07-22 17:15:13.034434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.928 [2024-07-22 17:15:13.433385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.186 [2024-07-22 17:15:13.700065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:37:12.445 17:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.445 17:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:12.445 17:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:12.445 17:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:12.704 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:12.704 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.704 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:12.704 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.704 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:12.704 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:12.962 nvme0n1 00:37:12.962 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:12.962 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.962 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:12.962 17:15:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.962 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:12.962 17:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:12.962 Running I/O for 2 seconds... 00:37:12.962 [2024-07-22 17:15:14.521847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:37:12.962 [2024-07-22 17:15:14.524730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.962 [2024-07-22 17:15:14.524803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:12.962 [2024-07-22 17:15:14.539239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:37:12.962 [2024-07-22 17:15:14.542136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.962 [2024-07-22 17:15:14.542194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:12.962 [2024-07-22 17:15:14.557222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:37:12.962 [2024-07-22 17:15:14.560058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.962 [2024-07-22 17:15:14.560136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:12.962 [2024-07-22 17:15:14.575343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:37:12.962 [2024-07-22 17:15:14.578210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.962 [2024-07-22 17:15:14.578285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.593393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:37:13.222 [2024-07-22 17:15:14.596108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.596161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.610640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:37:13.222 [2024-07-22 17:15:14.613274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.613332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:37:13.222 [2024-07-22 17:15:14.627047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:37:13.222 [2024-07-22 17:15:14.629618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.629681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.643492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:37:13.222 [2024-07-22 17:15:14.646073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.646122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.660442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:37:13.222 [2024-07-22 17:15:14.662885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.662929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.676141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:37:13.222 [2024-07-22 17:15:14.678714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.678765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.692174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:37:13.222 [2024-07-22 17:15:14.694768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.694838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.709267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:37:13.222 [2024-07-22 17:15:14.711768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.711823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.726364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:37:13.222 [2024-07-22 17:15:14.728931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.728997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.743630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:37:13.222 [2024-07-22 17:15:14.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.746174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.760167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:37:13.222 [2024-07-22 17:15:14.762425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.762469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.775665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:37:13.222 [2024-07-22 17:15:14.778118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.778174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.792719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:37:13.222 [2024-07-22 17:15:14.795024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.795081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.809265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:37:13.222 [2024-07-22 17:15:14.811477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.811520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:13.222 [2024-07-22 17:15:14.826051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:37:13.222 [2024-07-22 17:15:14.828681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.222 [2024-07-22 17:15:14.828737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:13.534 [2024-07-22 17:15:14.843877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:37:13.534 [2024-07-22 17:15:14.846335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.534 [2024-07-22 17:15:14.846385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:13.534 [2024-07-22 17:15:14.861227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:37:13.535 [2024-07-22 17:15:14.863379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.863436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.877821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:37:13.535 [2024-07-22 17:15:14.880183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.880242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.895456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:37:13.535 [2024-07-22 17:15:14.897873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.897944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.913238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:37:13.535 [2024-07-22 17:15:14.915437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.915488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.930533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:37:13.535 [2024-07-22 17:15:14.932893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.932945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.948441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:37:13.535 [2024-07-22 17:15:14.950639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.950689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.965135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:37:13.535 [2024-07-22 17:15:14.967091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:13.535 [2024-07-22 17:15:14.967150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.981427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:37:13.535 [2024-07-22 17:15:14.983414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.983472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:14.997503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:37:13.535 [2024-07-22 17:15:14.999414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:14.999461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.013683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:37:13.535 [2024-07-22 17:15:15.015698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.015745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.029798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:37:13.535 [2024-07-22 17:15:15.031818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.031881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.046453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:37:13.535 [2024-07-22 17:15:15.048618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.048679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.063035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:37:13.535 [2024-07-22 17:15:15.065217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.065297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.080375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:37:13.535 [2024-07-22 17:15:15.082564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:18910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.082617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.098174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:37:13.535 [2024-07-22 17:15:15.100285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.100339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:13.535 [2024-07-22 17:15:15.116423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:37:13.535 [2024-07-22 17:15:15.118525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.535 [2024-07-22 17:15:15.118582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.133451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:37:13.794 [2024-07-22 17:15:15.135358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.135415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.149371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:37:13.794 [2024-07-22 17:15:15.151162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.151217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.165674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:37:13.794 [2024-07-22 17:15:15.167600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.167652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.182917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:37:13.794 [2024-07-22 17:15:15.184885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.184943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.199495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:37:13.794 [2024-07-22 17:15:15.201418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.201470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.215525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:37:13.794 [2024-07-22 17:15:15.217359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.217419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.232078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:37:13.794 [2024-07-22 17:15:15.233912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.233966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.248128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:37:13.794 [2024-07-22 17:15:15.249981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.250036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.264165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:37:13.794 [2024-07-22 17:15:15.265912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.265957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.280196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:37:13.794 [2024-07-22 17:15:15.281909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.281967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.295866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:37:13.794 [2024-07-22 17:15:15.297646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.311582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 
00:37:13.794 [2024-07-22 17:15:15.313374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.313420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.327921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:37:13.794 [2024-07-22 17:15:15.329691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.329752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.345244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:37:13.794 [2024-07-22 17:15:15.346911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.346960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.361699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:37:13.794 [2024-07-22 17:15:15.363368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.363429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.378916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:37:13.794 [2024-07-22 17:15:15.380698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.380764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:13.794 [2024-07-22 17:15:15.396784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:37:13.794 [2024-07-22 17:15:15.398495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.794 [2024-07-22 17:15:15.398552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.414368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:37:14.052 [2024-07-22 17:15:15.416055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.416110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.431490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:37:14.052 [2024-07-22 17:15:15.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.433202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.448546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:37:14.052 [2024-07-22 17:15:15.450243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.450301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.465046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:37:14.052 [2024-07-22 17:15:15.466633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.466691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.482248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:37:14.052 [2024-07-22 17:15:15.483772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.483830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.499040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:37:14.052 [2024-07-22 17:15:15.500583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.500636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.515041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:37:14.052 [2024-07-22 17:15:15.516589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.516639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.531937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:37:14.052 [2024-07-22 17:15:15.533464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.533512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:14.052 
[2024-07-22 17:15:15.548656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:37:14.052 [2024-07-22 17:15:15.550122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.550181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.565046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:37:14.052 [2024-07-22 17:15:15.566416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.566474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.581585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:37:14.052 [2024-07-22 17:15:15.583026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.583076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.606510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:37:14.052 [2024-07-22 17:15:15.609477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.052 [2024-07-22 17:15:15.609539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.052 [2024-07-22 17:15:15.625210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:37:14.052 [2024-07-22 17:15:15.628067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.053 [2024-07-22 17:15:15.628144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:14.053 [2024-07-22 17:15:15.643510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:37:14.053 [2024-07-22 17:15:15.646188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.053 [2024-07-22 17:15:15.646256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:14.053 [2024-07-22 17:15:15.660322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:37:14.053 [2024-07-22 17:15:15.662892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.053 [2024-07-22 17:15:15.662940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.677055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:37:14.311 [2024-07-22 17:15:15.679664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.679709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.693274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:37:14.311 [2024-07-22 17:15:15.695722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.695803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.709428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:37:14.311 [2024-07-22 17:15:15.711938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.711994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.726208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:37:14.311 [2024-07-22 17:15:15.728864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.728921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.743909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:37:14.311 [2024-07-22 17:15:15.746567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.746624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.762457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:37:14.311 [2024-07-22 17:15:15.765134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.765188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.779677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:37:14.311 [2024-07-22 17:15:15.782193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.782241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.796758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:37:14.311 [2024-07-22 17:15:15.799321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.799401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.813866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:37:14.311 [2024-07-22 17:15:15.816320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.816381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.830177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:37:14.311 [2024-07-22 17:15:15.832662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.832723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.846668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:37:14.311 [2024-07-22 17:15:15.849166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.849229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.863884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:37:14.311 [2024-07-22 17:15:15.866433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.866516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.881552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:37:14.311 [2024-07-22 17:15:15.884029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.884105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.899520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:37:14.311 [2024-07-22 17:15:15.902025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:14.311 [2024-07-22 17:15:15.902104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:14.311 [2024-07-22 17:15:15.918010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:37:14.311 [2024-07-22 17:15:15.920602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.311 [2024-07-22 17:15:15.920694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:14.570 [2024-07-22 17:15:15.936597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:37:14.570 [2024-07-22 17:15:15.938871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.570 [2024-07-22 17:15:15.938929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:14.570 [2024-07-22 17:15:15.953646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:37:14.570 [2024-07-22 17:15:15.956009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.570 [2024-07-22 17:15:15.956066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:14.570 [2024-07-22 17:15:15.971238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:37:14.570 [2024-07-22 17:15:15.973595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.570 [2024-07-22 17:15:15.973648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:14.570 [2024-07-22 17:15:15.987980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:37:14.570 [2024-07-22 17:15:15.990274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:15.990346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.004616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:37:14.571 [2024-07-22 17:15:16.006861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.006921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.021023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:37:14.571 [2024-07-22 17:15:16.023198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:18484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.023273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.037795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:37:14.571 [2024-07-22 17:15:16.040011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.040094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.054410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:37:14.571 [2024-07-22 17:15:16.056651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.056705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.070882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:37:14.571 [2024-07-22 17:15:16.073074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.073127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.087788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:37:14.571 [2024-07-22 17:15:16.089977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.090038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.105004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:37:14.571 [2024-07-22 17:15:16.107103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.107171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.122357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:37:14.571 [2024-07-22 17:15:16.124523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.124586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.139308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:37:14.571 [2024-07-22 17:15:16.141490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.141544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.156973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:37:14.571 [2024-07-22 17:15:16.159047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.159102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:14.571 [2024-07-22 17:15:16.174569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:37:14.571 [2024-07-22 17:15:16.176702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.571 [2024-07-22 17:15:16.176760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.191684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:37:14.831 [2024-07-22 17:15:16.193720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.193784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.208625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:37:14.831 [2024-07-22 17:15:16.210586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.210646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.224875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:37:14.831 [2024-07-22 17:15:16.226842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.226908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.241598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:37:14.831 [2024-07-22 17:15:16.243559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.243616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.258636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 
00:37:14.831 [2024-07-22 17:15:16.260563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.260617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.274936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:37:14.831 [2024-07-22 17:15:16.276818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.276886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.291109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:37:14.831 [2024-07-22 17:15:16.292964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.293029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.307111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:37:14.831 [2024-07-22 17:15:16.308991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.309044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.322914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:37:14.831 [2024-07-22 17:15:16.324691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.324743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.338703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:37:14.831 [2024-07-22 17:15:16.340400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.340460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.354359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:37:14.831 [2024-07-22 17:15:16.356076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.356134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.370613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:37:14.831 [2024-07-22 17:15:16.372358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.372412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.386841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:37:14.831 [2024-07-22 17:15:16.388690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.388748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.404174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:37:14.831 [2024-07-22 17:15:16.406005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.406058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.421410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:37:14.831 [2024-07-22 17:15:16.423078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.423167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:14.831 [2024-07-22 17:15:16.437593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:37:14.831 [2024-07-22 17:15:16.439129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.831 [2024-07-22 17:15:16.439191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:15.091 [2024-07-22 17:15:16.453662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:37:15.091 [2024-07-22 17:15:16.455240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.091 [2024-07-22 17:15:16.455316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:15.091 [2024-07-22 17:15:16.469922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:37:15.091 [2024-07-22 17:15:16.471489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.091 [2024-07-22 17:15:16.471539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:15.091 [2024-07-22 17:15:16.485710] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8
00:37:15.091 [2024-07-22 17:15:16.487255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:15.091 [2024-07-22 17:15:16.487323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:37:15.091 [2024-07-22 17:15:16.502095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68
00:37:15.091 [2024-07-22 17:15:16.503719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:15.091 [2024-07-22 17:15:16.503787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:37:15.091
00:37:15.091                               Latency(us)
00:37:15.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:15.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:15.091 nvme0n1 : 2.01 15016.22 58.66 0.00 0.00 8515.87 2699.46 32705.58
00:37:15.091 ===================================================================================================================
00:37:15.091 Total : 15016.22 58.66 0.00 0.00 8515.87 2699.46 32705.58
00:37:15.091 0
00:37:15.091 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:15.091 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:15.091 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:15.091 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:15.091 | .driver_specific
00:37:15.091 | .nvme_error
00:37:15.091 | .status_code
00:37:15.091 | .command_transient_transport_error'
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 ))
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88763
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 88763 ']'
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 88763
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88763
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:15.349 killing process with pid 88763
Received shutdown signal, test time was about 2.000000 seconds
00
00:37:15.349                               Latency(us)
00:37:15.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:15.349
===================================================================================================================
00:37:15.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88763'
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 88763
00:37:15.349 17:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 88763
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88830
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88830 /var/tmp/bperf.sock
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 88830 ']'
00:37:16.806 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:16.807 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:16.807 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:16.807 17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
17:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:16.807 [2024-07-22 17:15:18.266937] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:37:16.807 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:16.807 Zero copy mechanism will not be used.
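The trace above restarts bdevperf as a standalone process that idles on a private RPC socket instead of running immediately; everything that follows (controller attach, error injection, starting the run) is driven over /var/tmp/bperf.sock. A minimal stand-alone sketch of that launch pattern, using only paths and flags visible in the trace, is given below; the readiness poll via rpc_get_methods is an illustrative substitute for the harness's waitforlisten helper, not a quote of it.

    #!/usr/bin/env bash
    # Sketch: run bdevperf in "wait for RPC" mode on a private socket.
    SPDK=/home/vagrant/spdk_repo/spdk          # repo path used by this job
    BPERF_SOCK=/var/tmp/bperf.sock

    # Core mask 0x2, 128 KiB random writes, queue depth 16, 2-second run;
    # -z keeps bdevperf idle until perform_tests arrives over the socket.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Illustrative readiness check (the job's waitforlisten helper does the
    # equivalent): wait until the UNIX-domain RPC socket answers.
    until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done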
00:37:16.807 [2024-07-22 17:15:18.267073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88830 ]
00:37:17.065 [2024-07-22 17:15:18.434687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:17.324 [2024-07-22 17:15:18.692536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:17.584 [2024-07-22 17:15:18.955171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:37:17.584 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:37:17.584 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:37:17.584 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:17.584 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:18.152 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:18.152 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:18.152 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:18.152 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:18.152 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:18.152 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:18.152 nvme0n1
00:37:18.417 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:18.417 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:18.417 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:18.417 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:18.417 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:18.417 17:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:18.417 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:18.417 Zero copy mechanism will not be used.
00:37:18.417 Running I/O for 2 seconds...
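With bdevperf listening, the trace above configures the digest-error case entirely over that socket: NVMe error counters are enabled with unlimited bdev retries, the TCP controller is attached with data digest (--ddgst) turned on, CRC-32C corruption is armed in the accel layer, and perform_tests starts the 2-second run. Afterwards the script's get_transient_errcount check (seen for the previous run, where it read 118) pulls the transient-transport-error counter out of bdev_get_iostat. The sketch below restates that flow; the rpc() wrapper stands in for the job's bperf_rpc/rpc_cmd helpers, and the meaning of "-i 32" is taken verbatim from the trace rather than asserted here.

    #!/usr/bin/env bash
    # Sketch of the digest-error flow traced above, driven over the bperf socket.
    SPDK=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Count NVMe errors and keep retrying at the bdev layer (-1), so injected
    # digest failures show up as counted transient errors rather than I/O failures.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP target with data digest enabled; the bdev appears as nvme0n1.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm CRC-32C corruption in the accel layer ("-i 32" as used by the job),
    # then start the queued bdevperf workload.
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # The pass/fail check mirrors get_transient_errcount in host/digest.sh.
    errs=$(rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))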
00:37:18.417 [2024-07-22 17:15:19.980286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:19.980745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:19.980812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:19.986391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:19.986846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:19.986915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:19.992298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:19.992731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:19.992790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:19.997981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:19.998405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:19.998474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:20.003677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:20.004127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:20.004200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:20.009490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:20.009890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:20.009937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:20.014857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:20.014965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:20.015001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:20.020340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:20.020440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:20.020486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:20.025773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:20.025861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:20.025904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.417 [2024-07-22 17:15:20.031236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.417 [2024-07-22 17:15:20.031341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.417 [2024-07-22 17:15:20.031374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.677 [2024-07-22 17:15:20.036669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.677 [2024-07-22 17:15:20.036766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.677 [2024-07-22 17:15:20.036799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:18.677 [2024-07-22 17:15:20.042084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.677 [2024-07-22 17:15:20.042168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.677 [2024-07-22 17:15:20.042213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.677 [2024-07-22 17:15:20.047175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.677 [2024-07-22 17:15:20.047285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.677 [2024-07-22 17:15:20.047317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.677 [2024-07-22 17:15:20.052219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:18.677 [2024-07-22 17:15:20.052331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.677 [2024-07-22 17:15:20.052362] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.677
[... further repetitions of the same three-message pattern elided for readability: tcp.c:2113:data_crc32_calc_done reports "Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90", nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected WRITE (sqid:1, cid:0 or cid:15, nsid:1, len:32, varying lba), and nvme_qpair.c: 474:spdk_nvme_print_completion prints its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycling 0001/0021/0041/0061, p:0 m:0 dnr:0); elided timestamps run from [2024-07-22 17:15:20.057451] through [2024-07-22 17:15:20.774742], elapsed markers 00:37:18.677 through 00:37:19.207 ...]
[2024-07-22 17:15:20.780145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.207 [2024-07-22 17:15:20.780282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.207 [2024-07-22 17:15:20.780317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.207 [2024-07-22 17:15:20.785608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.207 [2024-07-22 17:15:20.785712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.207 [2024-07-22 17:15:20.785759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.207 [2024-07-22 17:15:20.791068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.207 [2024-07-22 17:15:20.791172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.207 [2024-07-22 17:15:20.791220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.207 [2024-07-22 17:15:20.796640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.207 [2024-07-22 17:15:20.796759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.207 [2024-07-22 17:15:20.796799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.207 [2024-07-22 17:15:20.802066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.207 [2024-07-22 17:15:20.802161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.207 [2024-07-22 17:15:20.802212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.207 [2024-07-22 17:15:20.807435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.208 [2024-07-22 17:15:20.807555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.208 [2024-07-22 17:15:20.807602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.208 [2024-07-22 17:15:20.813131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.208 [2024-07-22 17:15:20.813273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.208 [2024-07-22 17:15:20.813326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.208 [2024-07-22 17:15:20.818739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.208 [2024-07-22 17:15:20.818857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.208 [2024-07-22 17:15:20.818894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.824150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.824261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.824308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.829666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.829765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.829814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.835125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.835233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.835294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.840706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.840850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.840889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.846064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.846160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.846203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.851359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.851453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.851503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.856757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.856879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.856918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.862352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.862470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.862512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.867991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.868104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.868156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.873692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.873821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.873862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.879493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.879656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.879698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.468 [2024-07-22 17:15:20.885228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.468 [2024-07-22 17:15:20.885351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.468 [2024-07-22 17:15:20.885417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.890858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.890976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.891020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.896566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.896682] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.896725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.902302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.902416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.902457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.908054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.908166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.908209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.913584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.913713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.913755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.919174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.919313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.919353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.924745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.924864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.924926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.930331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.930438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.930478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.935915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 
[2024-07-22 17:15:20.936026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.936067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.941483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.941601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.941640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.946983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.947091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.947130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.952642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.952751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.952793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.958068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.958168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.958208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.963713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.963832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.963904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.969295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.969400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.969438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.974721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.974813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.974850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.980115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.980222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.980262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.985965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.986069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.986108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.991531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.991664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.991702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:20.996942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:20.997057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:20.997095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:21.002332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.002420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:21.002456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:21.007572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.007684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:21.007717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 
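Every failure in this run follows the same two-line pattern: data_crc32_calc_done() in tcp.c finds that the CRC32C it computed over a received data PDU does not match the DDGST trailer carried on the wire, and the affected WRITE is then completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The sketch below only illustrates that digest arithmetic (NVMe/TCP defines the data digest as CRC32C); it is not SPDK's implementation, and the payload and "received" digest are made-up values.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
     * the algorithm NVMe/TCP uses for the DDGST trailer. This is NOT SPDK's
     * implementation; it only shows the arithmetic behind the check that
     * data_crc32_calc_done() performs. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical payload standing in for one 32-block WRITE (values made up). */
        uint8_t pdu_data[512 * 32];
        for (size_t i = 0; i < sizeof(pdu_data); i++) {
            pdu_data[i] = (uint8_t)i;
        }

        uint32_t computed = crc32c(pdu_data, sizeof(pdu_data));
        uint32_t wire_ddgst = 0xDEADBEEFu;  /* pretend the wire carried a stale digest */

        if (computed != wire_ddgst) {
            /* This mismatch is what the log reports as "Data digest error",
             * after which the command completes with status (00/22). */
            printf("Data digest error: computed=0x%08x received=0x%08x\n",
                   (unsigned)computed, (unsigned)wire_ddgst);
        }
        return 0;
    }

The log resumes below with the next injected digest error.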
[2024-07-22 17:15:21.012960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.013046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:21.013079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:21.018200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.018356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:21.018388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:21.023511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.023637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:21.023668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:21.028802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.028904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.469 [2024-07-22 17:15:21.028936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.469 [2024-07-22 17:15:21.034205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.469 [2024-07-22 17:15:21.034302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.034333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.039447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.039548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.039580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.044684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.044772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.044805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.049894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.050032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.050063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.055044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.055158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.055189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.060510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.060604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.060640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.065683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.065761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.065791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.070856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.070965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.070998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.076081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.076215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.076250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.470 [2024-07-22 17:15:21.081533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.470 [2024-07-22 17:15:21.081619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.470 [2024-07-22 17:15:21.081652] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.729 [2024-07-22 17:15:21.086834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.086933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.086965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.092054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.092153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.092187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.097489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.097590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.097621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.102796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.102893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.102925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.107858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.107954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.107988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.113054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.113183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.113216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.118338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.118431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.118463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.123365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.123457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.123488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.128467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.128610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.128642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.133948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.134073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.134104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.139295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.139387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.139418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.144630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.144766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.144798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.149918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.150006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.150037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.155344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.155441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.155472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.160658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.160744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.160777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.166018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.166106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.166139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.171376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.171516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.171553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.176932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.177111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.177146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.182286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.182415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.182452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.187555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.187665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.187702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.192987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 
[2024-07-22 17:15:21.193109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.193147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.198351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.198495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.198529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.203618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.203736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.203770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.208986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.209103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.209136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.214190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.214311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.214342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.219201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.219288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.730 [2024-07-22 17:15:21.219318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.730 [2024-07-22 17:15:21.224694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.730 [2024-07-22 17:15:21.224802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.224836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.230084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.230219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.230249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.235131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.235294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.235325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.240439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.240557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.240590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.245706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.245796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.245826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.250900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.250992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.251022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.256037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.256187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.256218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.261488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.261607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.261638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.731 
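For reference, the completion fields echoed after each digest error, the "(00/22)" pair plus p, m and dnr, all come from the 16-bit status/phase word of the NVMe completion entry. A minimal decode, assuming the bit layout from the NVMe base specification (bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT, bit 14 = More, bit 15 = Do Not Retry), reproduces exactly the "(00/22) p:0 m:0 dnr:0" printed by spdk_nvme_print_completion above:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the combined phase-tag + status word of an NVMe completion entry
     * into the fields shown in this log. Layout per the NVMe base spec:
     * bit 0 = P, bits 8:1 = SC, 11:9 = SCT, 13:12 = CRD, 14 = M, 15 = DNR. */
    static void print_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xFF;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* SCT 0x0 (generic command status), SC 0x22 = Transient Transport Error,
         * i.e. the "(00/22)" completions filling this log. */
        uint16_t transient_transport_error = (uint16_t)((0x0 << 9) | (0x22 << 1));
        print_status(transient_transport_error);  /* -> (00/22) p:0 m:0 dnr:0 */
        return 0;
    }

dnr:0 on every completion means the error is reported as retryable, which is consistent with a transport-level (digest) failure rather than a media or command error. The raw log continues below.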
[2024-07-22 17:15:21.266390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.266468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.266511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.271208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.271313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.271342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.276309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.276398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.276431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.281667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.281744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.281792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.286839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.286928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.286957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.291953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.292079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.292111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.297231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.297347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.297378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.302265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.302380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.302410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.307480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.307566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.307597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.312826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.312940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.312973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.317965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.318069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.318100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.323231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.323373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.323403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.328358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.328456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.328490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.333470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.333589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 
17:15:21.333618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.338672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.338747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.338778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.731 [2024-07-22 17:15:21.343834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.731 [2024-07-22 17:15:21.343958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.731 [2024-07-22 17:15:21.343990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.349016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.349133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.349165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.354550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.354680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.354712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.359954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.360044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.360076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.365238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.365369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.365399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.370548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.370655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.375747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.375873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.375937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.380933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.381018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.381049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.386406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.386557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.386589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.391717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.391827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.391868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.397126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.397317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.397349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.402565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.402656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.402687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.407859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.407981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.408018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.413233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.413372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.413408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.418729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.418873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.418911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.424158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.424284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.424324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.429500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.429591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.429628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.434861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.434947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.434983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.440292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.440396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.440430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.445621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.445747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.445780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.450876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.450984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.992 [2024-07-22 17:15:21.451015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.992 [2024-07-22 17:15:21.456348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.992 [2024-07-22 17:15:21.456453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.456488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.461671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.461792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.461822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.466853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.466939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.466972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.472159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.472281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.472327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.477335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.477451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.477481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.482672] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.482769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.482801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.487878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.488039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.488071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.493247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.493370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.493400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.498435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.498545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.498578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.503794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.503879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.503927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.508874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.508980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.509010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.514247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.514346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.514378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.519600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.519681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.519713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.525028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.525130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.525161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.530464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.530567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.530600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.536062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.536234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.536282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.541585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.541704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.541738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.547030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.547130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.547162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.552602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.552715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.552746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.558129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.558242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.558291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.563797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.563945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.563982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.568929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.569185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.569219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.574206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.574600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.574643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.579637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.580035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.580082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.585161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.585532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.585573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.590386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.590753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.590795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.595662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.993 [2024-07-22 17:15:21.596034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.993 [2024-07-22 17:15:21.596076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.993 [2024-07-22 17:15:21.601051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.994 [2024-07-22 17:15:21.601417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.994 [2024-07-22 17:15:21.601457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.994 [2024-07-22 17:15:21.606457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:19.994 [2024-07-22 17:15:21.606865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.994 [2024-07-22 17:15:21.606930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.611928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.612360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.612405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.617517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.617895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.622873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.623220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.623287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.628269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.628672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.628716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.633816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.634202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.634263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.639432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.639822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.639879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.644796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.644885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.644920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.650229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.650345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.650378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.655584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.655685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.655717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.661060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.661149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.661181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.666349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.666449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.254 [2024-07-22 17:15:21.666480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.254 [2024-07-22 17:15:21.671420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.254 [2024-07-22 17:15:21.671510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.671541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.676707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.676794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.676826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.682120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.682237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.682288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.687548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.687651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.687686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.693050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.693147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.693182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.698649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.698752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.698787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.703881] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.703983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.704018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.709385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.709481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.709515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.714749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.714867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.714899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.719981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.720073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.720109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.725628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.725730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.725769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.731126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.731258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.731300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.736593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.736710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.736752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.742170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.742309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.742349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.747691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.747811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.747863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.753216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.753315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.753352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.758646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.758744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.758793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.763911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.763997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.764030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.769141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.769216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.769248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.774507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.774592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.774623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.779720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.779804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.779835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.784878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.784958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.784996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.790156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.790257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.790303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.795454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.795553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.795585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.800672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.800754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.800787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.806056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.806145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.255 [2024-07-22 17:15:21.806181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.811350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.255 [2024-07-22 17:15:21.811433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:20.255 [2024-07-22 17:15:21.811467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.255 [2024-07-22 17:15:21.816503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.816600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.816634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.821754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.821846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.821879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.826910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.826994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.827026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.832166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.832247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.832297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.837587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.837678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.837709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.842583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.842683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.842714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.847800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.847914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.847948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.853164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.853249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.853313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.858536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.858634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.858679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.863644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.863740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.863771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.256 [2024-07-22 17:15:21.868869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.256 [2024-07-22 17:15:21.868979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.256 [2024-07-22 17:15:21.869012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.874293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.874372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.874409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.879459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.879552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.879584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.884736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.884820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.884852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.890059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.890155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.895271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.895381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.895412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.900660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.900763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.900797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.906079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.906200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.906233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.911537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.911654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.911692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.916908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.917004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.917039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.922207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.922335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.922370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.927453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.927559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.927592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.932841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.932931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.932966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.938234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.938344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.938376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.943744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.943857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.943893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.949371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.949470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.949527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.954955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.955056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.955092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.960519] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.960631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.960668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.516 [2024-07-22 17:15:21.966099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:20.516 [2024-07-22 17:15:21.966210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.516 [2024-07-22 17:15:21.966262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.516 00:37:20.516 Latency(us) 00:37:20.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:20.516 nvme0n1 : 2.00 5768.73 721.09 0.00 0.00 2768.00 1888.06 10922.67 00:37:20.516 =================================================================================================================== 00:37:20.516 Total : 5768.73 721.09 0.00 0.00 2768.00 1888.06 10922.67 00:37:20.516 0 00:37:20.516 17:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:20.516 17:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:20.516 17:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:20.516 | .driver_specific 00:37:20.516 | .nvme_error 00:37:20.516 | .status_code 00:37:20.516 | .command_transient_transport_error' 00:37:20.517 17:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88830 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 88830 ']' 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 88830 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88830 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:20.775 killing process with pid 88830 00:37:20.775 Received shutdown signal, test time was about 2.000000 seconds 00:37:20.775 00:37:20.775 Latency(us) 00:37:20.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.775 
=================================================================================================================== 00:37:20.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88830' 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 88830 00:37:20.775 17:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 88830 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 88581 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 88581 ']' 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 88581 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88581 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:22.151 killing process with pid 88581 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88581' 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 88581 00:37:22.151 17:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 88581 00:37:24.054 00:37:24.054 real 0m25.253s 00:37:24.054 user 0m47.195s 00:37:24.054 sys 0m5.575s 00:37:24.054 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:24.054 ************************************ 00:37:24.054 END TEST nvmf_digest_error 00:37:24.054 ************************************ 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:24.055 rmmod nvme_tcp 00:37:24.055 rmmod nvme_fabrics 00:37:24.055 rmmod nvme_keyring 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
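For reference, the pass/fail decision in the nvmf_digest_error test above reduces to a single RPC plus a jq filter: host/digest.sh's get_transient_errcount asks the bdevperf instance on /var/tmp/bperf.sock for I/O statistics and pulls out the transient-transport-error counter. A minimal sketch of that step, with the bdev name, socket path and JSON path taken from the run above (the compact dotted jq path is equivalent to the piped form shown in the log):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock
  bdev=nvme0n1

  # bdev_get_iostat exposes the per-bdev NVMe error counters under driver_specific.nvme_error;
  # the digest test only cares about completions with COMMAND TRANSIENT TRANSPORT ERROR status.
  errcount=$("$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # Same check as host/digest.sh@71: at least one such completion must have been observed.
  (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"

In this run the counter came back as 372, which is why the (( 372 > 0 )) check above passed before the bperf and nvmf app processes were torn down.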
00:37:24.055 Process with pid 88581 is not found 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 88581 ']' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 88581 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 88581 ']' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 88581 00:37:24.055 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (88581) - No such process 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 88581 is not found' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:24.055 00:37:24.055 real 0m53.047s 00:37:24.055 user 1m37.712s 00:37:24.055 sys 0m11.669s 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.055 ************************************ 00:37:24.055 END TEST nvmf_digest 00:37:24.055 ************************************ 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.055 ************************************ 00:37:24.055 START TEST nvmf_host_multipath 00:37:24.055 ************************************ 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:37:24.055 * Looking for test storage... 
00:37:24.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.055 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
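The multipath.sh variables being set here (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, rpc_py, bpf_sh, plus the cnode1 NQN defined next) drive the target-side bring-up that the rest of this log exercises. Condensed into one sketch for orientation, with every value copied from the RPC calls that appear further down; the target listens on rpc.py's default /var/tmp/spdk.sock in this run, so no -s flag is needed:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $rpc_py nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, options as used by the harness
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                            # 64 MB malloc bdev, 512-byte blocks
  $rpc_py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
  $rpc_py nvmf_subsystem_add_ns "$NQN" Malloc0
  # Two listeners on the same address, one per port, give the initiator two paths to flip between:
  $rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

Each confirm_io_on_port block later in the log then flips the ANA state of these two listeners with nvmf_subsystem_listener_set_ana_state, attaches bpf_sh with nvmf_path.bt to count I/O per path (the '@path[10.0.0.2, port]: N' lines after each 'Attaching 4 probes...'), and uses nvmf_subsystem_get_listeners plus jq to confirm that the port carrying I/O is the one left in the expected ANA state.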
00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:24.056 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:24.315 17:15:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:24.315 Cannot find device "nvmf_tgt_br" 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:24.315 Cannot find device "nvmf_tgt_br2" 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:24.315 Cannot find device "nvmf_tgt_br" 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:24.315 Cannot find device "nvmf_tgt_br2" 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:24.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:24.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:24.315 17:15:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:24.315 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:24.316 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:24.316 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:24.316 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:24.574 17:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:24.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:37:24.574 00:37:24.574 --- 10.0.0.2 ping statistics --- 00:37:24.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.574 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:37:24.574 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:24.574 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:24.574 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:37:24.575 00:37:24.575 --- 10.0.0.3 ping statistics --- 00:37:24.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.575 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:24.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:24.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:37:24.575 00:37:24.575 --- 10.0.0.1 ping statistics --- 00:37:24.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.575 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=89123 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 89123 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 89123 ']' 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:24.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:24.575 17:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:24.575 [2024-07-22 17:15:26.153387] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:37:24.575 [2024-07-22 17:15:26.153525] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.834 [2024-07-22 17:15:26.332488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:25.137 [2024-07-22 17:15:26.661642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:25.137 [2024-07-22 17:15:26.661712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:25.137 [2024-07-22 17:15:26.661730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:25.137 [2024-07-22 17:15:26.661766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:25.137 [2024-07-22 17:15:26.661781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:25.137 [2024-07-22 17:15:26.661946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.137 [2024-07-22 17:15:26.661976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.396 [2024-07-22 17:15:26.943049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=89123 00:37:25.655 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:25.913 [2024-07-22 17:15:27.428600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.913 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:26.171 Malloc0 00:37:26.171 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:37:26.429 17:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:26.687 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:26.946 [2024-07-22 17:15:28.395403] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:26.946 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:27.204 [2024-07-22 17:15:28.607455] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=89173 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 89173 /var/tmp/bdevperf.sock 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 89173 ']' 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:27.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:27.204 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:27.205 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:27.205 17:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:28.139 17:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:28.139 17:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:37:28.139 17:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:28.397 17:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:37:28.963 Nvme0n1 00:37:28.963 17:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:29.221 Nvme0n1 00:37:29.221 17:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:37:29.221 17:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:37:30.202 17:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:37:30.202 17:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:30.461 17:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:30.719 17:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:37:30.719 17:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89220 00:37:30.719 17:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:30.719 17:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:37.311 Attaching 4 probes... 00:37:37.311 @path[10.0.0.2, 4421]: 15579 00:37:37.311 @path[10.0.0.2, 4421]: 17047 00:37:37.311 @path[10.0.0.2, 4421]: 15883 00:37:37.311 @path[10.0.0.2, 4421]: 16083 00:37:37.311 @path[10.0.0.2, 4421]: 16448 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89220 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:37.311 17:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:37.568 17:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:37:37.568 17:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89337 00:37:37.568 17:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:37.568 17:15:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:44.123 Attaching 4 probes... 00:37:44.123 @path[10.0.0.2, 4420]: 13877 00:37:44.123 @path[10.0.0.2, 4420]: 16326 00:37:44.123 @path[10.0.0.2, 4420]: 13571 00:37:44.123 @path[10.0.0.2, 4420]: 11518 00:37:44.123 @path[10.0.0.2, 4420]: 13898 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89337 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:37:44.123 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:44.382 17:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:44.639 17:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:37:44.639 17:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:44.639 17:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89445 00:37:44.639 17:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:51.197 Attaching 4 probes... 00:37:51.197 @path[10.0.0.2, 4421]: 11842 00:37:51.197 @path[10.0.0.2, 4421]: 13648 00:37:51.197 @path[10.0.0.2, 4421]: 14522 00:37:51.197 @path[10.0.0.2, 4421]: 15860 00:37:51.197 @path[10.0.0.2, 4421]: 15751 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89445 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:37:51.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:51.456 17:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:51.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:37:51.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:51.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89558 00:37:51.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:58.276 Attaching 4 probes... 
00:37:58.276 00:37:58.276 00:37:58.276 00:37:58.276 00:37:58.276 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89558 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89671 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:58.276 17:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:38:04.867 17:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:38:04.867 17:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:04.867 Attaching 4 probes... 
00:38:04.867 @path[10.0.0.2, 4421]: 16450 00:38:04.867 @path[10.0.0.2, 4421]: 16943 00:38:04.867 @path[10.0.0.2, 4421]: 17376 00:38:04.867 @path[10.0.0.2, 4421]: 17059 00:38:04.867 @path[10.0.0.2, 4421]: 16977 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89671 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:04.867 17:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:38:05.800 17:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:38:05.800 17:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89788 00:38:05.800 17:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:38:05.801 17:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:12.405 Attaching 4 probes... 
00:38:12.405 @path[10.0.0.2, 4420]: 16896 00:38:12.405 @path[10.0.0.2, 4420]: 17042 00:38:12.405 @path[10.0.0.2, 4420]: 16549 00:38:12.405 @path[10.0.0.2, 4420]: 16456 00:38:12.405 @path[10.0.0.2, 4420]: 16204 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89788 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:12.405 [2024-07-22 17:16:13.899987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:12.405 17:16:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:12.663 17:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:38:19.217 17:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:38:19.217 17:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89964 00:38:19.217 17:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:38:19.217 17:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89123 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:25.779 Attaching 4 probes... 
00:38:25.779 @path[10.0.0.2, 4421]: 16431 00:38:25.779 @path[10.0.0.2, 4421]: 17285 00:38:25.779 @path[10.0.0.2, 4421]: 17052 00:38:25.779 @path[10.0.0.2, 4421]: 16272 00:38:25.779 @path[10.0.0.2, 4421]: 16289 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89964 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 89173 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 89173 ']' 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 89173 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89173 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:38:25.779 killing process with pid 89173 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89173' 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 89173 00:38:25.779 17:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 89173 00:38:25.779 Connection closed with partial response: 00:38:25.779 00:38:25.779 00:38:26.356 17:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 89173 00:38:26.356 17:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:26.357 [2024-07-22 17:15:28.748551] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:38:26.357 [2024-07-22 17:15:28.748782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89173 ] 00:38:26.357 [2024-07-22 17:15:28.935025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.357 [2024-07-22 17:15:29.248285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:26.357 [2024-07-22 17:15:29.539490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:38:26.357 Running I/O for 90 seconds... 00:38:26.357 [2024-07-22 17:15:39.141377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 
sqhd:001f p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.141952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.141980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.142010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.142066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.142134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.142181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.142228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.142287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 
17:15:39.142858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.142980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.142999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.143028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.357 [2024-07-22 17:15:39.143048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.143085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.143115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.143145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.143164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.143193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.143213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:26.357 [2024-07-22 17:15:39.143263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.357 [2024-07-22 17:15:39.143284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13216 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.143916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.143963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.143988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.144887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.144957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.144984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a 
p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.358 [2024-07-22 17:15:39.145467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.145530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.145589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:26.358 [2024-07-22 17:15:39.145626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.358 [2024-07-22 17:15:39.145650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.145684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.145712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.145747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.145771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.145807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.145831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.145866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.145890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.145926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.145951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.145985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.146475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.146933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.146958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.147388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.147413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.149383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.359 [2024-07-22 17:15:39.149452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.149503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.149529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.149563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.149588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.149623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:87 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.359 [2024-07-22 17:15:39.149647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:26.359 [2024-07-22 17:15:39.149681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.149705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.149754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.149779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.149814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.149855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.149890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.149914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.149967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.149993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:38:26.360 [2024-07-22 17:15:39.150886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:39.150959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:39.150985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.360 [2024-07-22 17:15:45.772716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.772764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.772812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.772861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.772908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.772966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.772995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.773014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.773041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.773060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.773088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.773107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.773134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.773153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.773189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.360 [2024-07-22 17:15:45.773208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:26.360 [2024-07-22 17:15:45.773235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.773317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.773363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.773410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.773457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.773505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120920 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.773978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.773998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.361 [2024-07-22 17:15:45.774390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.774437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.774492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.774539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.774587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.774635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.361 [2024-07-22 17:15:45.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:26.361 [2024-07-22 17:15:45.774726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.774746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 
17:15:45.774775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.774795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.774839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.774860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.774888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.774908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.774936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.774955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.774983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.775647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.775696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.775752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.775896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.775945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.775974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.775995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.362 [2024-07-22 17:15:45.776561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.776616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.776664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.776720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.776769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:26.362 [2024-07-22 17:15:45.776798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.362 [2024-07-22 17:15:45.776818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.776846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.776866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.776895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.776915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.776944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.776964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.776993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.777013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.777062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.777122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.777171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777420] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 
sqhd:001d p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.777901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.777921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.778988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.363 [2024-07-22 17:15:45.779028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:26.363 [2024-07-22 17:15:45.779830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.363 [2024-07-22 17:15:45.779863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:45.779903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:45.779922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:45.779960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:45.779980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:45.780021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:45.780042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:45.780079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:45.780098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:45.780136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 
17:15:45.780156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.146690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.146823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.146983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106640 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.147919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.147978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.148922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.148983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.149028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.149131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.149235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.149362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.364 [2024-07-22 17:15:53.149466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.149569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.149691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.149791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.149897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.149962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.150003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 
dnr:0 00:38:26.364 [2024-07-22 17:15:53.150063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.150105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.150164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.150208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.150288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.150332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.150388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.150429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:26.364 [2024-07-22 17:15:53.150483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.364 [2024-07-22 17:15:53.150525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.150582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.150627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.150686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.150730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.150789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.150833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.150893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.150952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.151060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.151164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.151355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.151466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.151570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.151680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.151782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.151906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.151968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.152011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.365 [2024-07-22 17:15:53.152142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 
17:15:53.152264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.152388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.152491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.152595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.152698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.152799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.152903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.152960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106328 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.153943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.153986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.365 [2024-07-22 17:15:53.154742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:26.365 [2024-07-22 17:15:53.154800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.154843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.154902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.154947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.155068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.155172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.155298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.155405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 
m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.155511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.155614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.155715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.155820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.155941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.155999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.156044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.156146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.156266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.156391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.366 [2024-07-22 17:15:53.156495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.156601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.156708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.156814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.156916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.156974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157554] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.157952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.157996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.158056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.158158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.366 [2024-07-22 17:15:53.158202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:26.366 [2024-07-22 17:15:53.158285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.158393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.158495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.158597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.158703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.158805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.158909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.158971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.159102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:101 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.159814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.159873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:15:53.161172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.161348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.161470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.161606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.161725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.161843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.161916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.161962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.162038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.162082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:15:53.162186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:15:53.162236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.347651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.347732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.347807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.347829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.347866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.347902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.347931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.347950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.347978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.347997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.348043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.348090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.367 [2024-07-22 17:16:06.348162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:38:26.367 [2024-07-22 17:16:06.348236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:26.367 [2024-07-22 17:16:06.348482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.367 [2024-07-22 17:16:06.348500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.348923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.348981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 
[2024-07-22 17:16:06.349561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.368 [2024-07-22 17:16:06.349897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.349982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.349998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.350015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.350031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.350049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.350065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.350114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.350130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.368 [2024-07-22 17:16:06.350148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.368 [2024-07-22 17:16:06.350164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:26.369 [2024-07-22 17:16:06.350717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.350915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.350979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.350997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.369 [2024-07-22 17:16:06.351326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.369 [2024-07-22 17:16:06.351538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.369 [2024-07-22 17:16:06.351553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.351976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.351994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.352036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.352074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.352112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.370 [2024-07-22 17:16:06.352151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:38:26.370 [2024-07-22 17:16:06.352217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2056 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2064 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2072 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352567] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2088 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2096 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2104 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.370 [2024-07-22 17:16:06.352821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.370 [2024-07-22 17:16:06.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2120 len:8 PRP1 0x0 PRP2 0x0 00:38:26.370 [2024-07-22 17:16:06.352852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.370 [2024-07-22 17:16:06.352869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.352884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.352898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2128 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.352915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.352932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.352946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.352960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2136 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.352978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.352995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.353018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.353031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.353048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.353081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.353094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2152 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.353110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.353139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.353152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2160 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.353168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.353196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.353209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2168 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.353226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:26.371 [2024-07-22 17:16:06.353254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:26.371 [2024-07-22 17:16:06.353275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:8 PRP1 0x0 PRP2 0x0 00:38:26.371 [2024-07-22 17:16:06.353292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353606] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
00:38:26.371 [2024-07-22 17:16:06.353781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.371 [2024-07-22 17:16:06.353812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.371 [2024-07-22 17:16:06.353850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.371 [2024-07-22 17:16:06.353889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.371 [2024-07-22 17:16:06.353924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.371 [2024-07-22 17:16:06.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.371 [2024-07-22 17:16:06.353987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:38:26.371 [2024-07-22 17:16:06.355212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.371 [2024-07-22 17:16:06.355286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:38:26.371 [2024-07-22 17:16:06.355762] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.371 [2024-07-22 17:16:06.355795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:38:26.371 [2024-07-22 17:16:06.355815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:38:26.371 [2024-07-22 17:16:06.355907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:38:26.371 [2024-07-22 17:16:06.355963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.371 [2024-07-22 17:16:06.355990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.371 [2024-07-22 17:16:06.356011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.371 [2024-07-22 17:16:06.356064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.371 [2024-07-22 17:16:06.356084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:26.371 [2024-07-22 17:16:16.426808] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:38:26.371 Received shutdown signal, test time was about 55.643002 seconds
00:38:26.371
00:38:26.371 Latency(us)
00:38:26.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:26.371 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:26.371 Verification LBA range: start 0x0 length 0x4000
00:38:26.371 Nvme0n1 : 55.64 6766.53 26.43 0.00 0.00 18884.36 236.98 7030452.42
00:38:26.371 ===================================================================================================================
00:38:26.371 Total : 6766.53 26.43 0.00 0.00 18884.36 236.98 7030452.42
00:38:26.371 17:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:26.630 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:38:26.631 rmmod nvme_tcp
00:38:26.631 rmmod nvme_fabrics
00:38:26.631 rmmod nvme_keyring
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 89123 ']'
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 89123
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 89123 ']'
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 89123
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89123
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:38:26.631 killing process with pid
89123 00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89123' 00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 89123 00:38:26.631 17:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 89123 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:28.534 00:38:28.534 real 1m4.352s 00:38:28.534 user 2m54.557s 00:38:28.534 sys 0m21.080s 00:38:28.534 ************************************ 00:38:28.534 END TEST nvmf_host_multipath 00:38:28.534 ************************************ 00:38:28.534 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.535 ************************************ 00:38:28.535 START TEST nvmf_timeout 00:38:28.535 ************************************ 00:38:28.535 17:16:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:38:28.535 * Looking for test storage... 
00:38:28.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:28.535 Cannot find device "nvmf_tgt_br" 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:28.535 Cannot find device "nvmf_tgt_br2" 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
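(Annotation, not part of the captured output.) The nvmf_veth_init trace that continues below builds the virtual topology this test runs on: an initiator-side veth (nvmf_init_if, 10.0.0.1) and two target-side veths moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3), all joined through the nvmf_br bridge, with iptables rules admitting the NVMe/TCP port. Condensed from the commands traced below, purely for orientation:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target path
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # reachability checks before the target is started
  ping -c 1 10.0.0.3
  # (each link is also brought up with "ip link set ... up", as traced below)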
00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:28.535 Cannot find device "nvmf_tgt_br" 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:38:28.535 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:28.794 Cannot find device "nvmf_tgt_br2" 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:28.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:28.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:28.794 17:16:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:28.794 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:29.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:29.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:38:29.052 00:38:29.052 --- 10.0.0.2 ping statistics --- 00:38:29.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.052 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:29.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:29.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:38:29.052 00:38:29.052 --- 10.0.0.3 ping statistics --- 00:38:29.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.052 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:29.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:29.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:38:29.052 00:38:29.052 --- 10.0.0.1 ping statistics --- 00:38:29.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.052 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:29.052 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=90296 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 90296 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 90296 ']' 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:38:29.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:29.053 17:16:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:29.053 [2024-07-22 17:16:30.595495] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:29.053 [2024-07-22 17:16:30.595673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:29.311 [2024-07-22 17:16:30.786775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:29.573 [2024-07-22 17:16:31.115110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:29.573 [2024-07-22 17:16:31.115172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:29.573 [2024-07-22 17:16:31.115187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:29.573 [2024-07-22 17:16:31.115203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:29.573 [2024-07-22 17:16:31.115215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:29.573 [2024-07-22 17:16:31.115383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.573 [2024-07-22 17:16:31.115524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.832 [2024-07-22 17:16:31.377140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:30.091 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:30.349 [2024-07-22 17:16:31.840077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.349 17:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:38:30.607 Malloc0 00:38:30.607 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:30.866 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:31.126 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:31.385 [2024-07-22 17:16:32.910327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=90345 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 90345 /var/tmp/bdevperf.sock 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 90345 ']' 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:31.385 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:31.385 17:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:38:31.643 [2024-07-22 17:16:33.012965] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:31.643 [2024-07-22 17:16:33.013114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90345 ] 00:38:31.643 [2024-07-22 17:16:33.185875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.910 [2024-07-22 17:16:33.461563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:32.170 [2024-07-22 17:16:33.738981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:38:32.428 17:16:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:32.428 17:16:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:38:32.428 17:16:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:32.687 17:16:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:38:32.946 NVMe0n1 00:38:32.946 17:16:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=90369 00:38:32.946 17:16:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:32.946 17:16:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:38:33.204 Running I/O for 10 seconds... 
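(Annotation, not part of the captured output.) The trace above is the timeout test's full setup: the target gets a TCP transport, a 64 MB malloc bdev with 512-byte blocks exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a separate bdevperf process attaches it as NVMe0 with a 5-second controller-loss timeout and 2-second reconnect delay before running a 10-second, queue-depth-128, 4 KiB verify workload. Condensed from the commands traced above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (nvmf_tgt, pid 90296, running inside nvmf_tgt_ns_spdk):
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side (bdevperf, pid 90345, RPC socket /var/tmp/bdevperf.sock):
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the workload (-q 128 -o 4096 -w verify -t 10 on the bdevperf command line):
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests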
00:38:34.159 17:16:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.420 [2024-07-22 17:16:35.812825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.420 [2024-07-22 17:16:35.812905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.812927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.420 [2024-07-22 17:16:35.812945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.812961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.420 [2024-07-22 17:16:35.812982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.812998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.420 [2024-07-22 17:16:35.813015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:34.420 [2024-07-22 17:16:35.813332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.420 [2024-07-22 17:16:35.813364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 
17:16:35.813538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.813981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.813994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.420 [2024-07-22 17:16:35.814327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.420 [2024-07-22 17:16:35.814342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56880 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 
17:16:35.814933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.814970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.814989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.421 [2024-07-22 17:16:35.815697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.421 [2024-07-22 17:16:35.815710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.815977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.815991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:38:34.422 [2024-07-22 17:16:35.816010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.816977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.816991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.817022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.817036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57448 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.422 [2024-07-22 17:16:35.817069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.422 [2024-07-22 17:16:35.817088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.423 [2024-07-22 17:16:35.817102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.423 [2024-07-22 17:16:35.817147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.423 [2024-07-22 17:16:35.817180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.423 [2024-07-22 17:16:35.817218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:34.423 [2024-07-22 17:16:35.817433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:34.423 [2024-07-22 17:16:35.817733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.423 [2024-07-22 17:16:35.817769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.817790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:38:34.423 [2024-07-22 17:16:35.817810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:34.423 [2024-07-22 17:16:35.817826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:34.423 [2024-07-22 17:16:35.817840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57496 len:8 PRP1 0x0 PRP2 0x0 00:38:34.423 [2024-07-22 17:16:35.817857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.423 [2024-07-22 17:16:35.818191] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:38:34.423 [2024-07-22 17:16:35.818488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:34.423 [2024-07-22 17:16:35.818531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:34.423 [2024-07-22 17:16:35.818656] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.423 [2024-07-22 17:16:35.818685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:38:34.423 [2024-07-22 17:16:35.818702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:34.423 [2024-07-22 17:16:35.818730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:34.423 [2024-07-22 17:16:35.818752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:34.423 [2024-07-22 17:16:35.818779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:34.423 [2024-07-22 17:16:35.818796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:34.423 [2024-07-22 17:16:35.818828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
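Note on the block above: the long run of *NOTICE* lines is the expected teardown pattern for a TCP qpair that still has queued I/O. Every outstanding WRITE/READ is completed manually as ABORTED - SQ DELETION (00/08), the qpair (0x61500002b000) is disconnected and freed, and bdev_nvme schedules a controller reset. Each reconnect to 10.0.0.2:4420 then fails inside uring_sock_create with errno 111 (ECONNREFUSED) because nothing is accepting on that port during the failure window, so the reset fails and is retried after the configured reconnect delay. A hedged bash sketch of how such a window is opened and closed using the listener RPCs that appear elsewhere in this run (an illustration only, not the literal host/timeout.sh code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the target listener: host reconnects to 10.0.0.2:4420 now fail with errno 111
  # and queued I/O on the affected qpair is aborted with "SQ DELETION".
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 2   # let bdev_nvme retry the controller reset and fail while the listener is down
  # Restore the listener so a later reconnect attempt can succeed and I/O can resume.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420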
00:38:34.423 [2024-07-22 17:16:35.818843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:34.423 17:16:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:38:36.337 [2024-07-22 17:16:37.819188] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.337 [2024-07-22 17:16:37.819309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:38:36.337 [2024-07-22 17:16:37.819340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:36.337 [2024-07-22 17:16:37.819390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:36.337 [2024-07-22 17:16:37.819446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:36.337 [2024-07-22 17:16:37.819471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:36.337 [2024-07-22 17:16:37.819494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:36.337 [2024-07-22 17:16:37.819548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:36.337 [2024-07-22 17:16:37.819569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:36.337 17:16:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:38:36.337 17:16:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:36.337 17:16:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:38:36.594 17:16:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:38:36.595 17:16:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:38:36.595 17:16:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:38:36.595 17:16:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:38:37.160 17:16:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:38:37.160 17:16:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:38:38.535 [2024-07-22 17:16:39.819801] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.535 [2024-07-22 17:16:39.819901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:38:38.535 [2024-07-22 17:16:39.819924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:38.535 [2024-07-22 17:16:39.819965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:38.535 [2024-07-22 17:16:39.819990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.535 [2024-07-22 17:16:39.820008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization 
failed 00:38:38.535 [2024-07-22 17:16:39.820024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.535 [2024-07-22 17:16:39.820064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.535 [2024-07-22 17:16:39.820080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.473 [2024-07-22 17:16:41.820183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.473 [2024-07-22 17:16:41.820279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.473 [2024-07-22 17:16:41.820301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.473 [2024-07-22 17:16:41.820317] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:38:40.473 [2024-07-22 17:16:41.820361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.408 00:38:41.408 Latency(us) 00:38:41.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.408 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:41.408 Verification LBA range: start 0x0 length 0x4000 00:38:41.408 NVMe0n1 : 8.15 866.42 3.38 15.71 0.00 145111.04 4306.65 7030452.42 00:38:41.408 =================================================================================================================== 00:38:41.408 Total : 866.42 3.38 15.71 0.00 145111.04 4306.65 7030452.42 00:38:41.408 0 00:38:41.974 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:38:41.974 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:41.974 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:38:42.232 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:38:42.232 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:38:42.232 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:38:42.232 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 90369 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 90345 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 90345 ']' 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 90345 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90345 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:38:42.491 killing process with pid 90345 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90345' 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 90345 00:38:42.491 Received shutdown signal, test time was about 9.290538 seconds 00:38:42.491 00:38:42.491 Latency(us) 00:38:42.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.491 =================================================================================================================== 00:38:42.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:42.491 17:16:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 90345 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:44.394 [2024-07-22 17:16:45.765819] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=90503 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 90503 /var/tmp/bdevperf.sock 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 90503 ']' 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:44.394 17:16:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:44.394 [2024-07-22 17:16:45.899832] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
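For the next case the listener is added back and a fresh bdevperf instance is started under RPC control: judging by its pairing with the later perform_tests RPC, -z keeps the benchmark idle until told to start, -r /var/tmp/bdevperf.sock is its private RPC socket, and the waitforlisten helper from autotest_common.sh (traced above) blocks until that socket is up before any configuration RPCs are sent. A hedged bash sketch of this launch sequence using only the paths and flags visible in the log (backgrounding with & and capturing $! is an illustration, not necessarily how the harness obtains the PID):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # Start bdevperf on core 2 (mask 0x4) with its own RPC socket; -z defers the actual
  # I/O run until a perform_tests RPC is issued later.
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  # Block until the app is listening on $sock before sending configuration RPCs.
  waitforlisten $bdevperf_pid $sock   # helper sourced from autotest_common.sh, as traced above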
00:38:44.394 [2024-07-22 17:16:45.899988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90503 ] 00:38:44.653 [2024-07-22 17:16:46.071347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.911 [2024-07-22 17:16:46.361659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:45.170 [2024-07-22 17:16:46.638660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:38:45.429 17:16:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:45.429 17:16:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:38:45.429 17:16:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:45.687 17:16:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:38:45.946 NVMe0n1 00:38:45.946 17:16:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=90521 00:38:45.946 17:16:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:45.946 17:16:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:38:46.205 Running I/O for 10 seconds... 
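The controller for this case is attached with explicit recovery knobs, which roughly mean: --reconnect-delay-sec 1 spaces reconnect attempts one second apart, --fast-io-fail-timeout-sec 2 starts failing queued I/O back to bdevperf after two seconds of disconnection, and --ctrlr-loss-timeout-sec 5 gives up and deletes the controller after five seconds without a successful reconnect. Once a controller has been deleted, the get_controller/get_bdev helpers used by the test return empty strings, which is what the [[ '' == '' ]] checks at timeout.sh@62/@63 earlier in this run assert after the first case times out. A hedged bash sketch of that verification step, built only from the rpc.py and jq invocations traced in this log:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # A controller/bdev that has been deleted after the recovery timeouts expire simply
  # disappears from these listings, so the name queries come back empty.
  controller=$($spdk/scripts/rpc.py -s $sock bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$($spdk/scripts/rpc.py -s $sock bdev_get_bdevs | jq -r '.[].name')
  echo "controller='$controller' bdev='$bdev'"   # "NVMe0"/"NVMe0n1" while attached, empty after deletion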
00:38:47.138 17:16:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:47.454 [2024-07-22 17:16:48.777877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.777940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.777956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.777967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.777982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.777992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.454 [2024-07-22 17:16:48.778079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 
is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is 
same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same 
with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with 
the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.778993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.779003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:38:47.455 [2024-07-22 17:16:48.779079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.455 [2024-07-22 17:16:48.779123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.779982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.779998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 [2024-07-22 17:16:48.780215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.456 [2024-07-22 17:16:48.780231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.456 
[2024-07-22 17:16:48.780246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.780983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.780998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.457 [2024-07-22 17:16:48.781469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.457 [2024-07-22 17:16:48.781484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:47.458 [2024-07-22 17:16:48.781531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781846] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.781968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.781983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.781999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.782499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.458 [2024-07-22 17:16:48.782529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.458 [2024-07-22 17:16:48.782632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.458 [2024-07-22 17:16:48.782658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:47.459 [2024-07-22 17:16:48.782830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.782981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.782998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.783013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.783028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.783042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.783059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.783073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:47.459 [2024-07-22 17:16:48.783089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.459 [2024-07-22 17:16:48.783102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:38:47.459 [2024-07-22 17:16:48.783122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:47.459 [2024-07-22 17:16:48.783137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually:
00:38:47.459 [2024-07-22 17:16:48.783153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62896 len:8 PRP1 0x0 PRP2 0x0
00:38:47.459 [2024-07-22 17:16:48.783167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:47.459 [2024-07-22 17:16:48.783488] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller.
00:38:47.459 [2024-07-22 17:16:48.783759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:47.459 [2024-07-22 17:16:48.783871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:38:47.459 [2024-07-22 17:16:48.784000] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:38:47.459 [2024-07-22 17:16:48.784023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:38:47.459 [2024-07-22 17:16:48.784042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:38:47.459 [2024-07-22 17:16:48.784065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:38:47.459 [2024-07-22 17:16:48.784088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:47.459 [2024-07-22 17:16:48.784101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:47.459 [2024-07-22 17:16:48.784117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:47.459 [2024-07-22 17:16:48.784143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:47.459 [2024-07-22 17:16:48.784162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:47.459 17:16:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:38:48.402 [2024-07-22 17:16:49.784369] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:38:48.402 [2024-07-22 17:16:49.784449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:38:48.402 [2024-07-22 17:16:49.784474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:38:48.402 [2024-07-22 17:16:49.784514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:38:48.402 [2024-07-22 17:16:49.784542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:48.402 [2024-07-22 17:16:49.784556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:48.402 [2024-07-22 17:16:49.784579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:48.402 [2024-07-22 17:16:49.784615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
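The connect() failed, errno = 111 (ECONNREFUSED) retries above occur while the target's TCP listener on 10.0.0.2:4420 is down; the reset attempts keep failing until the listener is re-added by the rpc.py call traced in the next lines, after which the controller reset succeeds. As a rough, hedged sketch of that listener toggle, the snippet below mirrors only the two rpc.py invocations that appear verbatim in this trace; the RPC_PY path is taken from the trace, while the helper names and the one-second delay are illustrative assumptions (the real sequencing is driven by host/timeout.sh).

#!/usr/bin/env python3
"""Sketch of the listener-toggle pattern exercised by this timeout test.

Assumptions: rpc.py path as shown in the trace; toggle_listener() and the
delay are illustrative, not part of SPDK or of host/timeout.sh itself.
"""
import subprocess
import time

# Path taken from the shell trace above; adjust for your checkout.
RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.2", "-s", "4420"]


def rpc(*args: str) -> None:
    """Run one rpc.py subcommand and raise if it exits non-zero."""
    subprocess.run([RPC_PY, *args], check=True)


def toggle_listener(down_seconds: float = 1.0) -> None:
    """Drop the TCP listener, wait, then restore it.

    While the listener is gone, the initiator's reconnect attempts fail with
    errno 111 (connection refused), matching the repeated uring_sock_create
    errors in the log; re-adding the listener lets the next reset succeed.
    """
    rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
    time.sleep(down_seconds)  # assumed delay; the test script controls the real timing
    rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)


if __name__ == "__main__":
    toggle_listener()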
00:38:48.402 [2024-07-22 17:16:49.784633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:48.402 17:16:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:48.402 [2024-07-22 17:16:50.001527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:48.660 17:16:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 90521
00:38:49.226 [2024-07-22 17:16:50.802252] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:38:57.356
00:38:57.356                                         Latency(us)
00:38:57.356 Device Information          : runtime(s)    IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:38:57.356 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:57.356 Verification LBA range: start 0x0 length 0x4000
00:38:57.356 NVMe0n1                     :     10.01   6100.02   23.83    0.00    0.00   20948.05   1552.58   3035877.18
00:38:57.356 ===================================================================================================================
00:38:57.356 Total                       :             6100.02   23.83    0.00    0.00   20948.05   1552.58   3035877.18
00:38:57.356 0
00:38:57.356 17:16:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=90626
00:38:57.356 17:16:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:57.356 17:16:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:38:57.356 Running I/O for 10 seconds...
00:38:57.356 17:16:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:57.356 [2024-07-22 17:16:58.891489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:57.356 [2024-07-22 17:16:58.891557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:57.356 [2024-07-22 17:16:58.891588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:57.356 [2024-07-22 17:16:58.891601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:57.356 [2024-07-22 17:16:58.891616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:57.356 [2024-07-22 17:16:58.891628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:57.356 [2024-07-22 17:16:58.891643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:57.356 [2024-07-22 17:16:58.891655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:57.356 [2024-07-22 17:16:58.891669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:57.356 [2024-07-22 17:16:58.891681] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.356 [2024-07-22 17:16:58.891706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.356 [2024-07-22 17:16:58.891732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.356 [2024-07-22 17:16:58.891758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.356 [2024-07-22 17:16:58.891783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.356 [2024-07-22 17:16:58.891808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.356 [2024-07-22 17:16:58.891833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.356 [2024-07-22 17:16:58.891858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.356 [2024-07-22 17:16:58.891909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.356 [2024-07-22 17:16:58.891936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.356 [2024-07-22 17:16:58.891951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.357 [2024-07-22 17:16:58.891964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion NOTICE pair repeats for every remaining queued command on sqid:1 (READs lba 78712-79288, WRITEs lba 79360-79656, len:8 each); each completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:38:57.359 [2024-07-22 17:16:58.895011]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.359 [2024-07-22 17:16:58.895022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.359 [2024-07-22 17:16:58.895035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:38:57.359 [2024-07-22 17:16:58.895050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:57.359 [2024-07-22 17:16:58.895065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:57.359 [2024-07-22 17:16:58.895076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:38:57.359 [2024-07-22 17:16:58.895089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.359 [2024-07-22 17:16:58.895392] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:38:57.359 [2024-07-22 17:16:58.895605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.359 [2024-07-22 17:16:58.895696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:57.359 [2024-07-22 17:16:58.895800] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.359 [2024-07-22 17:16:58.895820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:38:57.359 [2024-07-22 17:16:58.895847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:57.359 [2024-07-22 17:16:58.895884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:57.359 [2024-07-22 17:16:58.895921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.359 [2024-07-22 17:16:58.895934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.359 [2024-07-22 17:16:58.895948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.360 [2024-07-22 17:16:58.895974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.360 [2024-07-22 17:16:58.895988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.360 17:16:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:38:58.295 [2024-07-22 17:16:59.896173] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.295 [2024-07-22 17:16:59.896266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:38:58.295 [2024-07-22 17:16:59.896288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:58.295 [2024-07-22 17:16:59.896326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:58.295 [2024-07-22 17:16:59.896350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.295 [2024-07-22 17:16:59.896364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.295 [2024-07-22 17:16:59.896386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.295 [2024-07-22 17:16:59.896421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.295 [2024-07-22 17:16:59.896436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.669 [2024-07-22 17:17:00.896623] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.669 [2024-07-22 17:17:00.896710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:38:59.669 [2024-07-22 17:17:00.896731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:38:59.669 [2024-07-22 17:17:00.896768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:38:59.669 [2024-07-22 17:17:00.896794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.669 [2024-07-22 17:17:00.896807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.669 [2024-07-22 17:17:00.896823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.669 [2024-07-22 17:17:00.896858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.669 [2024-07-22 17:17:00.896874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.602 [2024-07-22 17:17:01.899947] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.602 [2024-07-22 17:17:01.900028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:39:00.602 [2024-07-22 17:17:01.900067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:39:00.602 [2024-07-22 17:17:01.900341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:39:00.602 [2024-07-22 17:17:01.900583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.602 [2024-07-22 17:17:01.900598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.602 [2024-07-22 17:17:01.900614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.602 [2024-07-22 17:17:01.904079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.602 [2024-07-22 17:17:01.904123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.602 17:17:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:00.602 [2024-07-22 17:17:02.123173] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:00.602 17:17:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 90626 00:39:01.536 [2024-07-22 17:17:02.938516] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
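The trace above is the recovery half of this timeout case: while bdev_nvme keeps retrying the controller reset and uring_sock_create reports connect() errno 111, the script sleeps, re-adds the TCP listener, and then waits on the bdevperf job, which only exits after "Resetting controller successful." The sketch below reproduces that sequence with the rpc.py call taken verbatim from the trace; the variable names are illustrative and the bdevperf pid (90626 in this run) comes from an earlier step of the real script.

#!/usr/bin/env bash
# Sketch of the listener re-add / wait flow traced above; the real logic lives
# in the host/timeout.sh script shown in the trace.
set -e

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
bdevperf_pid=90626   # pid captured when bdevperf was launched earlier in the test

# Let the initiator burn through a few failed reconnect attempts first.
sleep 3

# Restore the TCP listener; the target then logs
# "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***".
$rpc_py nvmf_subsystem_add_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420

# bdevperf returns once bdev_nvme logs "Resetting controller successful."
# and the queued I/O drains.
wait "$bdevperf_pid"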
00:39:06.810 00:39:06.810 Latency(us) 00:39:06.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.810 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:39:06.810 Verification LBA range: start 0x0 length 0x4000 00:39:06.810 NVMe0n1 : 10.01 5427.69 21.20 4061.30 0.00 13461.19 651.46 3019898.88 00:39:06.810 =================================================================================================================== 00:39:06.810 Total : 5427.69 21.20 4061.30 0.00 13461.19 0.00 3019898.88 00:39:06.810 0 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 90503 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 90503 ']' 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 90503 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90503 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:39:06.810 killing process with pid 90503 00:39:06.810 Received shutdown signal, test time was about 10.000000 seconds 00:39:06.810 00:39:06.810 Latency(us) 00:39:06.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.810 =================================================================================================================== 00:39:06.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90503' 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 90503 00:39:06.810 17:17:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 90503 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=90747 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 90747 /var/tmp/bdevperf.sock 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 90747 ']' 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:07.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:07.751 17:17:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:39:07.751 [2024-07-22 17:17:09.215728] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:07.752 [2024-07-22 17:17:09.215914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90747 ] 00:39:08.015 [2024-07-22 17:17:09.412470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.283 [2024-07-22 17:17:09.719244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.549 [2024-07-22 17:17:09.988386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:39:08.549 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:08.549 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:39:08.549 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90747 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:39:08.549 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=90762 00:39:08.549 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:39:08.818 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:39:09.089 NVMe0n1 00:39:09.363 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=90805 00:39:09.363 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:39:09.363 17:17:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:09.363 Running I/O for 10 seconds... 
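At this point the trace restarts bdevperf in idle mode and drives it over its RPC socket with explicit reconnect parameters before the next fault is injected. The sketch below collects those commands (paths and flags copied verbatim from the trace) into one place; process management, waitforlisten, and the bpftrace attachment are reduced to comments, and pids will differ between runs.

#!/usr/bin/env bash
# Condensed sketch of the bdevperf setup traced above; the real test wraps these
# calls in the waitforlisten/killprocess helpers from autotest_common.sh.
set -e

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) on core 2 (-m 0x4): queue depth 128, 4096-byte
# random reads for 10 seconds, controlled over $sock.
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!

# (here the test waits for $sock to appear and attaches
#  scripts/bpf/nvmf_timeout.bt to the new process via scripts/bpftrace.sh)

# bdev_nvme options exactly as traced.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1 -e 9

# Attach NVMe0 over TCP with a 5 s controller-loss timeout and a 2 s reconnect delay.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the queued workload; bdevperf prints "Running I/O for 10 seconds...".
$spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
rpc_pid=$!   # recorded as rpc_pid in the trace (90805 in this run)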
00:39:10.302 17:17:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:10.563 [2024-07-22 17:17:11.978559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.563 [2024-07-22 17:17:11.978886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.563 [2024-07-22 17:17:11.978902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:10.563 [2024-07-22 17:17:11.978914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for the remaining queued READs on sqid:1 (cid 10 through 58, assorted LBAs, len:8); each completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:39:10.564 [2024-07-22 17:17:11.980493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.564 [2024-07-22 17:17:11.980506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:39:10.564 [2024-07-22 17:17:11.980526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.564 [2024-07-22 17:17:11.980538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.564 [2024-07-22 17:17:11.980555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.564 [2024-07-22 17:17:11.980568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.564 [2024-07-22 17:17:11.980586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.564 [2024-07-22 17:17:11.980599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.564 [2024-07-22 17:17:11.980616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.564 [2024-07-22 17:17:11.980628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 
17:17:11.980831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.980982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.980995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.565 [2024-07-22 17:17:11.981666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.565 [2024-07-22 17:17:11.981679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102672 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.981971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.981983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 
[2024-07-22 17:17:11.982403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.566 [2024-07-22 17:17:11.982588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.566 [2024-07-22 17:17:11.982604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:39:10.566 [2024-07-22 17:17:11.982623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:10.567 [2024-07-22 17:17:11.982642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:10.567 [2024-07-22 17:17:11.982654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129208 len:8 PRP1 0x0 PRP2 0x0 00:39:10.567 [2024-07-22 17:17:11.982670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:10.567 [2024-07-22 17:17:11.983019] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 
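The burst of completions above is the host tearing down I/O qpair 1 for a controller reset: every READ still queued on sqid:1 is manually completed with ABORTED - SQ DELETION (00/08) before the qpair is disconnected and freed. If a copy of this console output is saved to a file, the aborts can be tallied with standard tools; a minimal sketch, assuming the log has been saved as build.log (a placeholder name, not something the test produces):

  # count every ABORTED - SQ DELETION completion printed during the reset
  grep -c 'ABORTED - SQ DELETION' build.log
  # count the distinct command identifiers aborted on the I/O queue
  grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -u | wc -l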
00:39:10.567 [2024-07-22 17:17:11.983303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:10.567 [2024-07-22 17:17:11.983406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:39:10.567 [2024-07-22 17:17:11.983560] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:39:10.567 [2024-07-22 17:17:11.983592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:39:10.567 [2024-07-22 17:17:11.983607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:39:10.567 [2024-07-22 17:17:11.983639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:39:10.567 [2024-07-22 17:17:11.983659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:10.567 [2024-07-22 17:17:11.983677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:10.567 [2024-07-22 17:17:11.983692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:10.567 [2024-07-22 17:17:11.983725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:10.567 [2024-07-22 17:17:11.983738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:10.567 17:17:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 90805 00:39:12.467 [2024-07-22 17:17:13.983984] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:39:12.467 [2024-07-22 17:17:13.984062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:39:12.467 [2024-07-22 17:17:13.984085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:39:12.467 [2024-07-22 17:17:13.984125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:39:12.467 [2024-07-22 17:17:13.984164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:12.467 [2024-07-22 17:17:13.984183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:12.467 [2024-07-22 17:17:13.984199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:12.467 [2024-07-22 17:17:13.984241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
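For context on the repeated reconnect attempts in this stretch of the log, errno = 111 is ECONNREFUSED on Linux: at each retry nothing is accepting TCP connections on 10.0.0.2 port 4420, so nvme_tcp_qpair_connect_sock fails and the controller reset is attempted again two seconds later. A quick host-side check of the listener, purely illustrative and not part of timeout.sh:

  # probe the NVMe/TCP listen port; exits 0 only if the port accepts a connection
  nc -z -w 1 10.0.0.2 4420 && echo 'listener up' || echo 'connection refused or timed out'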
00:39:12.467 [2024-07-22 17:17:13.984268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:14.374 [2024-07-22 17:17:15.984487] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.374 [2024-07-22 17:17:15.984574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:39:14.374 [2024-07-22 17:17:15.984598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:39:14.374 [2024-07-22 17:17:15.984637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:39:14.374 [2024-07-22 17:17:15.984663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:14.374 [2024-07-22 17:17:15.984681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:14.374 [2024-07-22 17:17:15.984698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:14.374 [2024-07-22 17:17:15.984742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:14.374 [2024-07-22 17:17:15.984758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:16.905 [2024-07-22 17:17:17.984867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:16.905 [2024-07-22 17:17:17.984941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:16.905 [2024-07-22 17:17:17.984960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:16.905 [2024-07-22 17:17:17.984976] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:39:16.905 [2024-07-22 17:17:17.985033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:17.473 00:39:17.473 Latency(us) 00:39:17.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.473 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:39:17.473 NVMe0n1 : 8.18 2017.85 7.88 15.64 0.00 62987.86 8051.57 7030452.42 00:39:17.473 =================================================================================================================== 00:39:17.473 Total : 2017.85 7.88 15.64 0.00 62987.86 8051.57 7030452.42 00:39:17.473 0 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:39:17.473 Attaching 5 probes... 
00:39:17.473 1229.220835: reset bdev controller NVMe0 00:39:17.473 1229.393825: reconnect bdev controller NVMe0 00:39:17.473 3229.735123: reconnect delay bdev controller NVMe0 00:39:17.473 3229.761083: reconnect bdev controller NVMe0 00:39:17.473 5230.261198: reconnect delay bdev controller NVMe0 00:39:17.473 5230.287386: reconnect bdev controller NVMe0 00:39:17.473 7230.753609: reconnect delay bdev controller NVMe0 00:39:17.473 7230.779417: reconnect bdev controller NVMe0 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 90762 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 90747 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 90747 ']' 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 90747 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90747 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:39:17.473 killing process with pid 90747 00:39:17.473 Received shutdown signal, test time was about 8.254665 seconds 00:39:17.473 00:39:17.473 Latency(us) 00:39:17.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.473 =================================================================================================================== 00:39:17.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90747' 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 90747 00:39:17.473 17:17:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 90747 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:19.376 rmmod 
nvme_tcp 00:39:19.376 rmmod nvme_fabrics 00:39:19.376 rmmod nvme_keyring 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 90296 ']' 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 90296 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 90296 ']' 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 90296 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90296 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:19.376 killing process with pid 90296 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90296' 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 90296 00:39:19.376 17:17:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 90296 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:21.281 00:39:21.281 real 0m52.723s 00:39:21.281 user 2m31.336s 00:39:21.281 sys 0m6.924s 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:21.281 ************************************ 00:39:21.281 END TEST nvmf_timeout 00:39:21.281 ************************************ 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:21.281 00:39:21.281 real 6m41.748s 00:39:21.281 user 18m15.657s 00:39:21.281 sys 1m34.329s 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:21.281 17:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.281 ************************************ 00:39:21.281 END TEST nvmf_host 00:39:21.281 ************************************ 00:39:21.281 17:17:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:21.281 00:39:21.281 real 17m5.018s 00:39:21.281 user 43m58.352s 00:39:21.281 sys 4m43.024s 00:39:21.281 17:17:22 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:21.281 17:17:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:21.281 ************************************ 00:39:21.282 END TEST nvmf_tcp 00:39:21.282 ************************************ 00:39:21.282 17:17:22 -- common/autotest_common.sh@1142 -- # return 0 00:39:21.282 17:17:22 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:39:21.282 17:17:22 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:39:21.282 17:17:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:21.282 17:17:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:21.282 17:17:22 -- common/autotest_common.sh@10 -- # set +x 00:39:21.282 ************************************ 00:39:21.282 START TEST nvmf_dif 00:39:21.282 ************************************ 00:39:21.282 17:17:22 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:39:21.547 * Looking for test storage... 00:39:21.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:21.547 17:17:22 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:21.547 17:17:22 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.547 17:17:22 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.547 17:17:22 nvmf_dif -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.547 17:17:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.547 17:17:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.547 17:17:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.547 17:17:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:21.547 17:17:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:21.547 17:17:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:21.547 17:17:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:21.547 17:17:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:21.547 17:17:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:21.547 17:17:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:21.547 17:17:22 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.548 17:17:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
13> /dev/null' 00:39:21.548 17:17:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:21.548 Cannot find device "nvmf_tgt_br" 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@155 -- # true 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:21.548 Cannot find device "nvmf_tgt_br2" 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@156 -- # true 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:21.548 17:17:22 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:21.548 Cannot find device "nvmf_tgt_br" 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@158 -- # true 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:21.548 Cannot find device "nvmf_tgt_br2" 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@159 -- # true 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:21.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@162 -- # true 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:21.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@163 -- # true 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name 
nvmf_init_br 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:21.548 17:17:23 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:21.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:21.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:39:21.804 00:39:21.804 --- 10.0.0.2 ping statistics --- 00:39:21.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.804 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:21.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:21.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:39:21.804 00:39:21.804 --- 10.0.0.3 ping statistics --- 00:39:21.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.804 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:21.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:21.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:39:21.804 00:39:21.804 --- 10.0.0.1 ping statistics --- 00:39:21.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.804 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:21.804 17:17:23 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:22.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:22.061 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:22.061 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:22.061 17:17:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:22.061 17:17:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:22.061 17:17:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:22.061 17:17:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:22.061 17:17:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:22.061 17:17:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:22.320 17:17:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:22.320 17:17:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:22.320 17:17:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:22.320 17:17:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=91268 00:39:22.320 17:17:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:22.320 17:17:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 91268 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 91268 ']' 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:22.320 17:17:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:22.320 [2024-07-22 17:17:23.832146] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:22.320 [2024-07-22 17:17:23.832336] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.579 [2024-07-22 17:17:24.018666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.838 [2024-07-22 17:17:24.309879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
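For reference, the network that nvmf_veth_init assembles in the trace above can be reproduced by hand with plain iproute2 and iptables calls. The sketch below is distilled from the logged commands (same interface names, namespace and 10.0.0.0/24 addressing); it is an illustration of the topology, not the common.sh implementation itself:

    # target-side interfaces live in a namespace, host-side veth peers hang off one bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP discovery/IO port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # same reachability check as the trace

Those pings, plus the one executed inside the namespace, are what produce the 0% packet loss statistics above; only after they succeed does the harness load nvme-tcp and start nvmf_tgt inside nvmf_tgt_ns_spdk.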
00:39:22.838 [2024-07-22 17:17:24.309941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:22.838 [2024-07-22 17:17:24.309973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:22.838 [2024-07-22 17:17:24.309989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:22.838 [2024-07-22 17:17:24.310001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:22.838 [2024-07-22 17:17:24.310056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.097 [2024-07-22 17:17:24.597143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:39:23.356 17:17:24 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:39:23.357 17:17:24 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 17:17:24 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:23.357 17:17:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:23.357 17:17:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 [2024-07-22 17:17:24.820096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.357 17:17:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 ************************************ 00:39:23.357 START TEST fio_dif_1_default 00:39:23.357 ************************************ 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 bdev_null0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:23.357 
17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.357 [2024-07-22 17:17:24.868323] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:23.357 { 00:39:23.357 "params": { 00:39:23.357 "name": "Nvme$subsystem", 00:39:23.357 "trtype": "$TEST_TRANSPORT", 00:39:23.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:23.357 "adrfam": "ipv4", 00:39:23.357 "trsvcid": "$NVMF_PORT", 00:39:23.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:23.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:23.357 "hdgst": ${hdgst:-false}, 00:39:23.357 "ddgst": ${ddgst:-false} 00:39:23.357 }, 00:39:23.357 "method": "bdev_nvme_attach_controller" 00:39:23.357 } 00:39:23.357 EOF 00:39:23.357 )") 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1341 -- # shift 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:23.357 "params": { 00:39:23.357 "name": "Nvme0", 00:39:23.357 "trtype": "tcp", 00:39:23.357 "traddr": "10.0.0.2", 00:39:23.357 "adrfam": "ipv4", 00:39:23.357 "trsvcid": "4420", 00:39:23.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:23.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:23.357 "hdgst": false, 00:39:23.357 "ddgst": false 00:39:23.357 }, 00:39:23.357 "method": "bdev_nvme_attach_controller" 00:39:23.357 }' 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:23.357 17:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.624 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:23.624 fio-3.35 00:39:23.624 Starting 1 thread 00:39:35.884 00:39:35.884 filename0: (groupid=0, jobs=1): err= 0: pid=91332: Mon Jul 22 17:17:36 2024 00:39:35.884 read: IOPS=8311, BW=32.5MiB/s (34.0MB/s)(325MiB/10001msec) 00:39:35.884 slat (usec): min=3, max=360, avg= 9.53, stdev= 3.89 00:39:35.884 clat (usec): min=360, max=2603, avg=454.38, stdev=47.55 00:39:35.884 lat (usec): min=367, max=2660, avg=463.91, stdev=48.41 00:39:35.884 clat percentiles (usec): 00:39:35.884 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 420], 00:39:35.884 | 30.00th=[ 433], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 461], 00:39:35.884 | 70.00th=[ 474], 80.00th=[ 482], 90.00th=[ 498], 95.00th=[ 519], 00:39:35.884 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[ 783], 99.95th=[ 979], 00:39:35.884 | 99.99th=[ 1336] 00:39:35.884 bw ( KiB/s): min=30784, max=35680, per=99.85%, avg=33195.79, stdev=1315.08, samples=19 00:39:35.884 iops : min= 7696, max= 8920, avg=8298.95, stdev=328.77, samples=19 00:39:35.884 lat (usec) : 
500=90.36%, 750=9.51%, 1000=0.09% 00:39:35.884 lat (msec) : 2=0.03%, 4=0.01% 00:39:35.884 cpu : usr=83.32%, sys=14.52%, ctx=177, majf=0, minf=1074 00:39:35.885 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:35.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.885 issued rwts: total=83124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.885 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:35.885 00:39:35.885 Run status group 0 (all jobs): 00:39:35.885 READ: bw=32.5MiB/s (34.0MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=325MiB (340MB), run=10001-10001msec 00:39:35.885 ----------------------------------------------------- 00:39:35.885 Suppressions used: 00:39:35.885 count bytes template 00:39:35.885 1 8 /usr/src/fio/parse.c 00:39:35.885 1 8 libtcmalloc_minimal.so 00:39:35.885 1 904 libcrypto.so 00:39:35.885 ----------------------------------------------------- 00:39:35.885 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.885 00:39:35.885 real 0m12.645s 00:39:35.885 user 0m10.460s 00:39:35.885 sys 0m1.865s 00:39:35.885 ************************************ 00:39:35.885 END TEST fio_dif_1_default 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:35.885 17:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:35.885 ************************************ 00:39:36.145 17:17:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:36.145 17:17:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:36.145 17:17:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:36.145 17:17:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:36.145 17:17:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:36.145 ************************************ 00:39:36.145 START TEST fio_dif_1_multi_subsystems 00:39:36.145 ************************************ 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.145 bdev_null0 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.145 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.146 [2024-07-22 17:17:37.577128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.146 bdev_null1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:36.146 { 00:39:36.146 "params": { 00:39:36.146 "name": "Nvme$subsystem", 00:39:36.146 "trtype": "$TEST_TRANSPORT", 00:39:36.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.146 "adrfam": "ipv4", 00:39:36.146 "trsvcid": "$NVMF_PORT", 00:39:36.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:36.146 "hdgst": ${hdgst:-false}, 00:39:36.146 "ddgst": ${ddgst:-false} 00:39:36.146 }, 00:39:36.146 "method": "bdev_nvme_attach_controller" 00:39:36.146 } 00:39:36.146 EOF 00:39:36.146 )") 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:36.146 { 00:39:36.146 "params": { 00:39:36.146 "name": "Nvme$subsystem", 00:39:36.146 "trtype": "$TEST_TRANSPORT", 00:39:36.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.146 "adrfam": "ipv4", 00:39:36.146 "trsvcid": "$NVMF_PORT", 00:39:36.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:36.146 "hdgst": ${hdgst:-false}, 00:39:36.146 "ddgst": ${ddgst:-false} 00:39:36.146 }, 00:39:36.146 "method": "bdev_nvme_attach_controller" 00:39:36.146 } 00:39:36.146 EOF 00:39:36.146 )") 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
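Stripped of the xtrace noise, each rpc_cmd sequence above amounts to four RPCs per subsystem: create a metadata-capable null bdev, create the subsystem, attach the bdev as a namespace, and open a TCP listener. A minimal sketch using scripts/rpc.py directly (the rpc_cmd helper wraps the same script; the default /var/tmp/spdk.sock RPC socket is assumed):

    # DIF-type-1 null bdev (64 MiB, 512-byte blocks with 16 bytes of metadata) behind one NVMe-oF/TCP subsystem
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The two-subsystem run repeats the same four calls for nqn.2016-06.io.spdk:cnode1/bdev_null1 against the same 10.0.0.2:4420 listener, so both fio threads connect through the one TCP transport that was created earlier with --dif-insert-or-strip.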
00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:36.146 "params": { 00:39:36.146 "name": "Nvme0", 00:39:36.146 "trtype": "tcp", 00:39:36.146 "traddr": "10.0.0.2", 00:39:36.146 "adrfam": "ipv4", 00:39:36.146 "trsvcid": "4420", 00:39:36.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:36.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:36.146 "hdgst": false, 00:39:36.146 "ddgst": false 00:39:36.146 }, 00:39:36.146 "method": "bdev_nvme_attach_controller" 00:39:36.146 },{ 00:39:36.146 "params": { 00:39:36.146 "name": "Nvme1", 00:39:36.146 "trtype": "tcp", 00:39:36.146 "traddr": "10.0.0.2", 00:39:36.146 "adrfam": "ipv4", 00:39:36.146 "trsvcid": "4420", 00:39:36.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:36.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:36.146 "hdgst": false, 00:39:36.146 "ddgst": false 00:39:36.146 }, 00:39:36.146 "method": "bdev_nvme_attach_controller" 00:39:36.146 }' 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:36.146 17:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.405 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:36.405 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:36.405 fio-3.35 00:39:36.405 Starting 2 threads 00:39:48.629 00:39:48.629 filename0: (groupid=0, jobs=1): err= 0: pid=91490: Mon Jul 22 17:17:48 2024 00:39:48.629 read: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(177MiB/10001msec) 00:39:48.629 slat (nsec): min=3981, max=92828, avg=15209.81, stdev=3678.16 00:39:48.629 clat (usec): min=402, max=2264, avg=841.03, stdev=57.50 00:39:48.629 lat (usec): min=410, max=2295, avg=856.24, stdev=58.67 00:39:48.629 clat percentiles (usec): 00:39:48.629 | 1.00th=[ 693], 5.00th=[ 742], 10.00th=[ 775], 20.00th=[ 799], 00:39:48.629 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 848], 60.00th=[ 857], 00:39:48.629 | 70.00th=[ 873], 80.00th=[ 889], 90.00th=[ 906], 95.00th=[ 922], 00:39:48.629 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1020], 99.95th=[ 1045], 00:39:48.629 | 99.99th=[ 1205] 00:39:48.629 bw ( KiB/s): min=17632, max=19168, per=50.09%, avg=18162.53, stdev=387.15, samples=19 00:39:48.629 iops : min= 4408, max= 4792, avg=4540.63, stdev=96.79, samples=19 00:39:48.629 lat (usec) : 500=0.12%, 750=5.91%, 1000=93.76% 00:39:48.629 lat (msec) : 2=0.20%, 4=0.01% 00:39:48.629 cpu : usr=89.97%, sys=8.84%, ctx=67, majf=0, minf=1074 00:39:48.629 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.629 issued rwts: total=45360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.629 
latency : target=0, window=0, percentile=100.00%, depth=4 00:39:48.629 filename1: (groupid=0, jobs=1): err= 0: pid=91491: Mon Jul 22 17:17:48 2024 00:39:48.629 read: IOPS=4529, BW=17.7MiB/s (18.6MB/s)(177MiB/10001msec) 00:39:48.629 slat (usec): min=5, max=355, avg=15.69, stdev= 5.94 00:39:48.629 clat (usec): min=424, max=2514, avg=840.07, stdev=60.83 00:39:48.629 lat (usec): min=432, max=2532, avg=855.76, stdev=61.70 00:39:48.629 clat percentiles (usec): 00:39:48.629 | 1.00th=[ 709], 5.00th=[ 742], 10.00th=[ 775], 20.00th=[ 799], 00:39:48.629 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 857], 00:39:48.629 | 70.00th=[ 865], 80.00th=[ 881], 90.00th=[ 906], 95.00th=[ 922], 00:39:48.629 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 1287], 99.95th=[ 1385], 00:39:48.629 | 99.99th=[ 1532] 00:39:48.629 bw ( KiB/s): min=17664, max=19168, per=50.01%, avg=18135.32, stdev=387.92, samples=19 00:39:48.629 iops : min= 4416, max= 4792, avg=4533.79, stdev=96.95, samples=19 00:39:48.629 lat (usec) : 500=0.06%, 750=5.65%, 1000=93.70% 00:39:48.629 lat (msec) : 2=0.58%, 4=0.01% 00:39:48.629 cpu : usr=88.90%, sys=9.39%, ctx=289, majf=0, minf=1072 00:39:48.629 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.629 issued rwts: total=45304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.629 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:48.629 00:39:48.629 Run status group 0 (all jobs): 00:39:48.629 READ: bw=35.4MiB/s (37.1MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=354MiB (371MB), run=10001-10001msec 00:39:48.887 ----------------------------------------------------- 00:39:48.887 Suppressions used: 00:39:48.887 count bytes template 00:39:48.887 2 16 /usr/src/fio/parse.c 00:39:48.887 1 8 libtcmalloc_minimal.so 00:39:48.887 1 904 libcrypto.so 00:39:48.887 ----------------------------------------------------- 00:39:48.887 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # 
for sub in "$@" 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.144 00:39:49.144 real 0m13.006s 00:39:49.144 user 0m20.405s 00:39:49.144 sys 0m2.263s 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:49.144 17:17:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:49.144 ************************************ 00:39:49.144 END TEST fio_dif_1_multi_subsystems 00:39:49.144 ************************************ 00:39:49.144 17:17:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:49.144 17:17:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:49.144 17:17:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:49.144 17:17:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:49.144 17:17:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:49.144 ************************************ 00:39:49.144 START TEST fio_dif_rand_params 00:39:49.144 ************************************ 00:39:49.144 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:49.145 17:17:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:49.145 bdev_null0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:49.145 [2024-07-22 17:17:50.640162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:49.145 { 00:39:49.145 "params": { 00:39:49.145 "name": "Nvme$subsystem", 00:39:49.145 "trtype": "$TEST_TRANSPORT", 00:39:49.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:49.145 "adrfam": "ipv4", 00:39:49.145 "trsvcid": "$NVMF_PORT", 00:39:49.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:49.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:49.145 "hdgst": ${hdgst:-false}, 00:39:49.145 "ddgst": ${ddgst:-false} 00:39:49.145 }, 00:39:49.145 "method": "bdev_nvme_attach_controller" 00:39:49.145 } 00:39:49.145 EOF 00:39:49.145 )") 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:49.145 17:17:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:49.145 "params": { 00:39:49.145 "name": "Nvme0", 00:39:49.145 "trtype": "tcp", 00:39:49.145 "traddr": "10.0.0.2", 00:39:49.145 "adrfam": "ipv4", 00:39:49.145 "trsvcid": "4420", 00:39:49.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:49.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:49.145 "hdgst": false, 00:39:49.145 "ddgst": false 00:39:49.145 }, 00:39:49.145 "method": "bdev_nvme_attach_controller" 00:39:49.145 }' 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:49.145 17:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:49.403 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:49.403 ... 
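The fio_bdev helper visible in the trace is just fio with SPDK's bdev engine preloaded and a JSON config passed to the plugin; the job parameters of this run (randread, 128k blocks, 3 jobs, iodepth 3, 5 seconds) could equally be written as a standalone job file. A hand-written equivalent is sketched below; the /tmp path and file names are illustrative, and the Nvme0n1 filename assumes the namespace bdev exposed by the bdev_nvme_attach_controller entry printed above:

    # dif.fio -- drives the NVMe-oF namespace through the SPDK bdev layer
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=/tmp/bdev.json   # JSON containing the bdev_nvme_attach_controller call
    thread=1                        # required by the SPDK fio plugin
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    [filename0]
    filename=Nvme0n1

    # run it the way fio_bdev does: preload the plugin built under build/fio
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif.fio

In the ASAN-instrumented build used here the harness additionally preloads libasan.so.8 ahead of the plugin, which is why the trace computes asan_lib before launching fio.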
00:39:49.403 fio-3.35 00:39:49.403 Starting 3 threads 00:39:56.011 00:39:56.011 filename0: (groupid=0, jobs=1): err= 0: pid=91657: Mon Jul 22 17:17:56 2024 00:39:56.011 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5007msec) 00:39:56.011 slat (nsec): min=5418, max=65269, avg=22776.60, stdev=9365.59 00:39:56.011 clat (usec): min=11103, max=23679, avg=13016.79, stdev=1311.86 00:39:56.011 lat (usec): min=11118, max=23714, avg=13039.57, stdev=1313.13 00:39:56.011 clat percentiles (usec): 00:39:56.012 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12256], 20.00th=[12387], 00:39:56.012 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:39:56.012 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13435], 95.00th=[17171], 00:39:56.012 | 99.00th=[17695], 99.50th=[18482], 99.90th=[23725], 99.95th=[23725], 00:39:56.012 | 99.99th=[23725] 00:39:56.012 bw ( KiB/s): min=23808, max=30720, per=33.32%, avg=29337.60, stdev=2073.44, samples=10 00:39:56.012 iops : min= 186, max= 240, avg=229.20, stdev=16.20, samples=10 00:39:56.012 lat (msec) : 20=99.74%, 50=0.26% 00:39:56.012 cpu : usr=89.83%, sys=9.61%, ctx=19, majf=0, minf=1075 00:39:56.012 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.012 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:56.012 filename0: (groupid=0, jobs=1): err= 0: pid=91658: Mon Jul 22 17:17:56 2024 00:39:56.012 read: IOPS=229, BW=28.6MiB/s (30.0MB/s)(143MiB/5004msec) 00:39:56.012 slat (usec): min=6, max=126, avg=22.60, stdev= 9.73 00:39:56.012 clat (usec): min=11116, max=23650, avg=13042.11, stdev=1380.07 00:39:56.012 lat (usec): min=11130, max=23685, avg=13064.71, stdev=1381.42 00:39:56.012 clat percentiles (usec): 00:39:56.012 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12256], 20.00th=[12387], 00:39:56.012 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:39:56.012 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13566], 95.00th=[17171], 00:39:56.012 | 99.00th=[18220], 99.50th=[21627], 99.90th=[23725], 99.95th=[23725], 00:39:56.012 | 99.99th=[23725] 00:39:56.012 bw ( KiB/s): min=23808, max=31488, per=33.23%, avg=29260.80, stdev=2155.57, samples=10 00:39:56.012 iops : min= 186, max= 246, avg=228.60, stdev=16.84, samples=10 00:39:56.012 lat (msec) : 20=99.48%, 50=0.52% 00:39:56.012 cpu : usr=89.67%, sys=9.39%, ctx=98, majf=0, minf=1075 00:39:56.012 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.012 issued rwts: total=1146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:56.012 filename0: (groupid=0, jobs=1): err= 0: pid=91659: Mon Jul 22 17:17:56 2024 00:39:56.012 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5007msec) 00:39:56.012 slat (nsec): min=5531, max=62605, avg=22869.49, stdev=7822.80 00:39:56.012 clat (usec): min=9929, max=23671, avg=13016.57, stdev=1321.18 00:39:56.012 lat (usec): min=9956, max=23700, avg=13039.44, stdev=1322.53 00:39:56.012 clat percentiles (usec): 00:39:56.012 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12256], 20.00th=[12387], 00:39:56.012 | 30.00th=[12518], 
40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:39:56.012 | 70.00th=[13042], 80.00th=[13042], 90.00th=[13566], 95.00th=[17171], 00:39:56.012 | 99.00th=[17695], 99.50th=[18482], 99.90th=[23725], 99.95th=[23725], 00:39:56.012 | 99.99th=[23725] 00:39:56.012 bw ( KiB/s): min=23855, max=30720, per=33.33%, avg=29342.30, stdev=2059.52, samples=10 00:39:56.012 iops : min= 186, max= 240, avg=229.20, stdev=16.20, samples=10 00:39:56.012 lat (msec) : 10=0.26%, 20=99.48%, 50=0.26% 00:39:56.012 cpu : usr=88.61%, sys=10.81%, ctx=9, majf=0, minf=1072 00:39:56.012 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.012 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:56.012 00:39:56.012 Run status group 0 (all jobs): 00:39:56.012 READ: bw=86.0MiB/s (90.2MB/s), 28.6MiB/s-28.7MiB/s (30.0MB/s-30.1MB/s), io=431MiB (451MB), run=5004-5007msec 00:39:56.981 ----------------------------------------------------- 00:39:56.981 Suppressions used: 00:39:56.981 count bytes template 00:39:56.981 5 44 /usr/src/fio/parse.c 00:39:56.981 1 8 libtcmalloc_minimal.so 00:39:56.981 1 904 libcrypto.so 00:39:56.981 ----------------------------------------------------- 00:39:56.981 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in 
"$@" 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 bdev_null0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 [2024-07-22 17:17:58.469214] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 bdev_null1 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
bdev_null1 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 bdev_null2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:56.981 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:56.982 17:17:58 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:56.982 { 00:39:56.982 "params": { 00:39:56.982 "name": "Nvme$subsystem", 00:39:56.982 "trtype": "$TEST_TRANSPORT", 00:39:56.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:56.982 "adrfam": "ipv4", 00:39:56.982 "trsvcid": "$NVMF_PORT", 00:39:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:56.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:56.982 "hdgst": ${hdgst:-false}, 00:39:56.982 "ddgst": ${ddgst:-false} 00:39:56.982 }, 00:39:56.982 "method": "bdev_nvme_attach_controller" 00:39:56.982 } 00:39:56.982 EOF 00:39:56.982 )") 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:56.982 { 00:39:56.982 "params": { 00:39:56.982 "name": "Nvme$subsystem", 00:39:56.982 "trtype": "$TEST_TRANSPORT", 00:39:56.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:56.982 "adrfam": "ipv4", 00:39:56.982 "trsvcid": "$NVMF_PORT", 00:39:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:56.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:56.982 "hdgst": ${hdgst:-false}, 00:39:56.982 "ddgst": ${ddgst:-false} 00:39:56.982 }, 00:39:56.982 "method": "bdev_nvme_attach_controller" 00:39:56.982 } 00:39:56.982 EOF 00:39:56.982 )") 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
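The xtrace above drives the whole target setup through rpc_cmd, which wraps SPDK's scripts/rpc.py. Outside the test harness, the same three-subsystem layout can be reproduced with a sequence along these lines (a minimal sketch, assuming a running nvmf_tgt on the default RPC socket, an SPDK checkout at ./spdk, and the 10.0.0.2:4420 TCP listener used in this log; only cnode0 is shown, the trace repeats the same steps for bdev_null1/cnode1 and bdev_null2/cnode2):

  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 (as in the trace)
  ./spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  # Expose it over NVMe/TCP as cnode0
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
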
00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:56.982 { 00:39:56.982 "params": { 00:39:56.982 "name": "Nvme$subsystem", 00:39:56.982 "trtype": "$TEST_TRANSPORT", 00:39:56.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:56.982 "adrfam": "ipv4", 00:39:56.982 "trsvcid": "$NVMF_PORT", 00:39:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:56.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:56.982 "hdgst": ${hdgst:-false}, 00:39:56.982 "ddgst": ${ddgst:-false} 00:39:56.982 }, 00:39:56.982 "method": "bdev_nvme_attach_controller" 00:39:56.982 } 00:39:56.982 EOF 00:39:56.982 )") 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:56.982 17:17:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:56.982 "params": { 00:39:56.982 "name": "Nvme0", 00:39:56.982 "trtype": "tcp", 00:39:56.982 "traddr": "10.0.0.2", 00:39:56.982 "adrfam": "ipv4", 00:39:56.982 "trsvcid": "4420", 00:39:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:56.982 "hdgst": false, 00:39:56.982 "ddgst": false 00:39:56.982 }, 00:39:56.982 "method": "bdev_nvme_attach_controller" 00:39:56.982 },{ 00:39:56.982 "params": { 00:39:56.982 "name": "Nvme1", 00:39:56.982 "trtype": "tcp", 00:39:56.982 "traddr": "10.0.0.2", 00:39:56.982 "adrfam": "ipv4", 00:39:56.982 "trsvcid": "4420", 00:39:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:56.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:56.982 "hdgst": false, 00:39:56.982 "ddgst": false 00:39:56.982 }, 00:39:56.982 "method": "bdev_nvme_attach_controller" 00:39:56.982 },{ 00:39:56.982 "params": { 00:39:56.982 "name": "Nvme2", 00:39:56.982 "trtype": "tcp", 00:39:56.982 "traddr": "10.0.0.2", 00:39:56.982 "adrfam": "ipv4", 00:39:56.982 "trsvcid": "4420", 00:39:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:56.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:56.982 "hdgst": false, 00:39:56.982 "ddgst": false 00:39:56.982 }, 00:39:56.982 "method": "bdev_nvme_attach_controller" 00:39:56.982 }' 00:39:57.241 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:57.241 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:57.241 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:39:57.241 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:57.241 17:17:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:57.241 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:57.241 ... 00:39:57.241 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:57.241 ... 00:39:57.241 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:57.241 ... 00:39:57.241 fio-3.35 00:39:57.241 Starting 24 threads 00:40:09.434 00:40:09.434 filename0: (groupid=0, jobs=1): err= 0: pid=91764: Mon Jul 22 17:18:10 2024 00:40:09.434 read: IOPS=183, BW=734KiB/s (751kB/s)(7340KiB/10006msec) 00:40:09.434 slat (usec): min=5, max=8031, avg=23.89, stdev=209.32 00:40:09.434 clat (msec): min=2, max=161, avg=87.09, stdev=27.73 00:40:09.434 lat (msec): min=2, max=161, avg=87.11, stdev=27.74 00:40:09.434 clat percentiles (msec): 00:40:09.434 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 69], 00:40:09.434 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 90], 00:40:09.434 | 70.00th=[ 96], 80.00th=[ 113], 90.00th=[ 124], 95.00th=[ 133], 00:40:09.434 | 99.00th=[ 140], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 163], 00:40:09.434 | 99.99th=[ 163] 00:40:09.434 bw ( KiB/s): min= 510, max= 912, per=3.88%, avg=699.53, stdev=116.42, samples=19 00:40:09.434 iops : min= 127, max= 228, avg=174.79, stdev=29.17, samples=19 00:40:09.434 lat (msec) : 4=0.33%, 10=1.91%, 20=1.42%, 50=2.18%, 100=67.08% 00:40:09.434 lat (msec) : 250=27.08% 00:40:09.434 cpu : usr=37.10%, sys=3.21%, ctx=1147, majf=0, minf=1074 00:40:09.434 IO depths : 1=0.1%, 2=3.3%, 4=13.0%, 8=69.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:40:09.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.434 filename0: (groupid=0, jobs=1): err= 0: pid=91765: Mon Jul 22 17:18:10 2024 00:40:09.434 read: IOPS=188, BW=752KiB/s (770kB/s)(7572KiB/10065msec) 00:40:09.434 slat (usec): min=5, max=11049, avg=36.11, stdev=352.46 00:40:09.434 clat (msec): min=35, max=147, avg=84.82, stdev=22.57 00:40:09.434 lat (msec): min=35, max=147, avg=84.85, stdev=22.57 00:40:09.434 clat percentiles (msec): 00:40:09.434 | 1.00th=[ 43], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 64], 00:40:09.434 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 89], 00:40:09.434 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 120], 95.00th=[ 129], 00:40:09.434 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:40:09.434 | 99.99th=[ 148] 00:40:09.434 bw ( KiB/s): min= 560, max= 920, per=4.16%, avg=750.40, stdev=90.91, samples=20 00:40:09.434 iops : min= 140, max= 230, avg=187.55, stdev=22.76, samples=20 00:40:09.434 lat (msec) : 50=3.80%, 100=75.12%, 250=21.08% 00:40:09.434 cpu : usr=41.26%, sys=3.20%, ctx=1344, majf=0, minf=1073 00:40:09.434 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:40:09.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.434 filename0: (groupid=0, jobs=1): err= 0: 
pid=91766: Mon Jul 22 17:18:10 2024 00:40:09.434 read: IOPS=200, BW=803KiB/s (822kB/s)(8032KiB/10006msec) 00:40:09.434 slat (usec): min=3, max=8045, avg=24.84, stdev=200.36 00:40:09.434 clat (msec): min=2, max=153, avg=79.58, stdev=26.67 00:40:09.434 lat (msec): min=2, max=153, avg=79.60, stdev=26.67 00:40:09.434 clat percentiles (msec): 00:40:09.434 | 1.00th=[ 8], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:40:09.434 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 85], 00:40:09.434 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 120], 95.00th=[ 130], 00:40:09.434 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 153], 00:40:09.434 | 99.99th=[ 155] 00:40:09.434 bw ( KiB/s): min= 614, max= 918, per=4.27%, avg=770.68, stdev=100.69, samples=19 00:40:09.434 iops : min= 153, max= 229, avg=192.58, stdev=25.22, samples=19 00:40:09.434 lat (msec) : 4=0.30%, 10=2.19%, 20=0.85%, 50=8.62%, 100=70.52% 00:40:09.434 lat (msec) : 250=17.53% 00:40:09.434 cpu : usr=31.26%, sys=2.33%, ctx=890, majf=0, minf=1073 00:40:09.434 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:40:09.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.434 filename0: (groupid=0, jobs=1): err= 0: pid=91767: Mon Jul 22 17:18:10 2024 00:40:09.434 read: IOPS=197, BW=790KiB/s (809kB/s)(7976KiB/10094msec) 00:40:09.434 slat (usec): min=7, max=10095, avg=41.55, stdev=366.16 00:40:09.434 clat (msec): min=6, max=167, avg=80.57, stdev=26.63 00:40:09.434 lat (msec): min=6, max=167, avg=80.61, stdev=26.62 00:40:09.434 clat percentiles (msec): 00:40:09.434 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 59], 00:40:09.434 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 86], 00:40:09.434 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 120], 95.00th=[ 129], 00:40:09.434 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 169], 00:40:09.434 | 99.99th=[ 169] 00:40:09.434 bw ( KiB/s): min= 574, max= 1400, per=4.38%, avg=790.90, stdev=180.64, samples=20 00:40:09.434 iops : min= 143, max= 350, avg=197.65, stdev=45.21, samples=20 00:40:09.434 lat (msec) : 10=1.60%, 20=1.60%, 50=7.67%, 100=69.86%, 250=19.26% 00:40:09.434 cpu : usr=36.81%, sys=2.80%, ctx=1201, majf=0, minf=1073 00:40:09.434 IO depths : 1=0.2%, 2=0.6%, 4=1.8%, 8=81.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:40:09.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.434 filename0: (groupid=0, jobs=1): err= 0: pid=91768: Mon Jul 22 17:18:10 2024 00:40:09.434 read: IOPS=184, BW=736KiB/s (754kB/s)(7376KiB/10017msec) 00:40:09.434 slat (usec): min=5, max=8036, avg=32.80, stdev=309.49 00:40:09.434 clat (msec): min=34, max=177, avg=86.72, stdev=25.98 00:40:09.434 lat (msec): min=34, max=177, avg=86.75, stdev=25.98 00:40:09.434 clat percentiles (msec): 00:40:09.434 | 1.00th=[ 46], 5.00th=[ 52], 10.00th=[ 58], 20.00th=[ 64], 00:40:09.434 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 87], 00:40:09.434 | 70.00th=[ 93], 80.00th=[ 108], 90.00th=[ 130], 95.00th=[ 132], 00:40:09.434 | 99.00th=[ 169], 99.50th=[ 169], 
99.90th=[ 178], 99.95th=[ 178], 00:40:09.434 | 99.99th=[ 178] 00:40:09.434 bw ( KiB/s): min= 496, max= 928, per=4.03%, avg=726.47, stdev=125.60, samples=19 00:40:09.434 iops : min= 124, max= 232, avg=181.58, stdev=31.37, samples=19 00:40:09.434 lat (msec) : 50=3.80%, 100=72.67%, 250=23.54% 00:40:09.434 cpu : usr=32.43%, sys=2.63%, ctx=1051, majf=0, minf=1073 00:40:09.434 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=75.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:40:09.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 complete : 0=0.0%, 4=89.0%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.434 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.434 filename0: (groupid=0, jobs=1): err= 0: pid=91769: Mon Jul 22 17:18:10 2024 00:40:09.434 read: IOPS=206, BW=825KiB/s (845kB/s)(8308KiB/10071msec) 00:40:09.434 slat (usec): min=5, max=8054, avg=31.09, stdev=316.96 00:40:09.434 clat (usec): min=1748, max=165663, avg=77238.80, stdev=34235.53 00:40:09.434 lat (usec): min=1769, max=165679, avg=77269.89, stdev=34242.70 00:40:09.434 clat percentiles (usec): 00:40:09.434 | 1.00th=[ 1909], 5.00th=[ 5669], 10.00th=[ 10945], 20.00th=[ 58459], 00:40:09.434 | 30.00th=[ 68682], 40.00th=[ 73925], 50.00th=[ 83362], 60.00th=[ 85459], 00:40:09.434 | 70.00th=[ 93848], 80.00th=[104334], 90.00th=[120062], 95.00th=[130548], 00:40:09.434 | 99.00th=[141558], 99.50th=[141558], 99.90th=[156238], 99.95th=[156238], 00:40:09.434 | 99.99th=[164627] 00:40:09.434 bw ( KiB/s): min= 582, max= 2539, per=4.58%, avg=825.55, stdev=416.76, samples=20 00:40:09.435 iops : min= 145, max= 634, avg=206.30, stdev=104.05, samples=20 00:40:09.435 lat (msec) : 2=2.26%, 4=1.69%, 10=5.97%, 20=2.31%, 50=3.27% 00:40:09.435 lat (msec) : 100=63.99%, 250=20.51% 00:40:09.435 cpu : usr=31.49%, sys=2.56%, ctx=950, majf=0, minf=1075 00:40:09.435 IO depths : 1=0.6%, 2=1.9%, 4=5.3%, 8=76.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename0: (groupid=0, jobs=1): err= 0: pid=91770: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=189, BW=760KiB/s (778kB/s)(7652KiB/10070msec) 00:40:09.435 slat (usec): min=5, max=8037, avg=28.38, stdev=274.81 00:40:09.435 clat (msec): min=39, max=166, avg=84.02, stdev=22.51 00:40:09.435 lat (msec): min=39, max=166, avg=84.05, stdev=22.52 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 62], 00:40:09.435 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 86], 00:40:09.435 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 120], 95.00th=[ 130], 00:40:09.435 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 167], 00:40:09.435 | 99.99th=[ 167] 00:40:09.435 bw ( KiB/s): min= 536, max= 928, per=4.21%, avg=758.45, stdev=94.59, samples=20 00:40:09.435 iops : min= 134, max= 232, avg=189.55, stdev=23.67, samples=20 00:40:09.435 lat (msec) : 50=4.60%, 100=75.22%, 250=20.18% 00:40:09.435 cpu : usr=37.16%, sys=2.84%, ctx=978, majf=0, minf=1072 00:40:09.435 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 
4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=1913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename0: (groupid=0, jobs=1): err= 0: pid=91771: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=186, BW=745KiB/s (763kB/s)(7456KiB/10006msec) 00:40:09.435 slat (usec): min=5, max=8036, avg=21.73, stdev=185.87 00:40:09.435 clat (msec): min=7, max=145, avg=85.73, stdev=25.65 00:40:09.435 lat (msec): min=7, max=145, avg=85.75, stdev=25.66 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 11], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 67], 00:40:09.435 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 88], 00:40:09.435 | 70.00th=[ 94], 80.00th=[ 113], 90.00th=[ 125], 95.00th=[ 130], 00:40:09.435 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 146], 00:40:09.435 | 99.99th=[ 146] 00:40:09.435 bw ( KiB/s): min= 510, max= 920, per=4.00%, avg=721.58, stdev=116.85, samples=19 00:40:09.435 iops : min= 127, max= 230, avg=180.37, stdev=29.26, samples=19 00:40:09.435 lat (msec) : 10=0.64%, 20=1.39%, 50=2.41%, 100=70.76%, 250=24.79% 00:40:09.435 cpu : usr=41.70%, sys=3.10%, ctx=1510, majf=0, minf=1073 00:40:09.435 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.4%, 16=14.5%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename1: (groupid=0, jobs=1): err= 0: pid=91772: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=160, BW=642KiB/s (657kB/s)(6456KiB/10060msec) 00:40:09.435 slat (usec): min=5, max=4031, avg=22.13, stdev=141.50 00:40:09.435 clat (msec): min=59, max=161, avg=99.47, stdev=20.88 00:40:09.435 lat (msec): min=59, max=161, avg=99.50, stdev=20.89 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 64], 5.00th=[ 73], 10.00th=[ 80], 20.00th=[ 82], 00:40:09.435 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 101], 00:40:09.435 | 70.00th=[ 111], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 136], 00:40:09.435 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 163], 00:40:09.435 | 99.99th=[ 163] 00:40:09.435 bw ( KiB/s): min= 396, max= 768, per=3.54%, avg=637.95, stdev=105.86, samples=20 00:40:09.435 iops : min= 99, max= 192, avg=159.45, stdev=26.43, samples=20 00:40:09.435 lat (msec) : 100=60.90%, 250=39.10% 00:40:09.435 cpu : usr=39.88%, sys=3.13%, ctx=1322, majf=0, minf=1075 00:40:09.435 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=1614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename1: (groupid=0, jobs=1): err= 0: pid=91773: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=191, BW=764KiB/s (783kB/s)(7692KiB/10064msec) 00:40:09.435 slat (usec): min=5, max=8052, avg=29.84, stdev=274.31 00:40:09.435 clat (msec): min=5, max=156, avg=83.40, stdev=25.17 00:40:09.435 lat (msec): min=5, max=156, avg=83.43, stdev=25.17 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 11], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 67], 00:40:09.435 | 
30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 85], 00:40:09.435 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 131], 00:40:09.435 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 157], 00:40:09.435 | 99.99th=[ 157] 00:40:09.435 bw ( KiB/s): min= 585, max= 1280, per=4.25%, avg=765.40, stdev=148.99, samples=20 00:40:09.435 iops : min= 146, max= 320, avg=191.25, stdev=37.28, samples=20 00:40:09.435 lat (msec) : 10=0.94%, 20=1.56%, 50=5.62%, 100=71.66%, 250=20.23% 00:40:09.435 cpu : usr=31.17%, sys=2.74%, ctx=893, majf=0, minf=1074 00:40:09.435 IO depths : 1=0.2%, 2=0.8%, 4=2.9%, 8=79.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename1: (groupid=0, jobs=1): err= 0: pid=91774: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=175, BW=701KiB/s (717kB/s)(7020KiB/10020msec) 00:40:09.435 slat (usec): min=3, max=8045, avg=36.35, stdev=358.05 00:40:09.435 clat (msec): min=36, max=189, avg=91.11, stdev=25.02 00:40:09.435 lat (msec): min=36, max=189, avg=91.15, stdev=25.03 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 48], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 71], 00:40:09.435 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 92], 00:40:09.435 | 70.00th=[ 102], 80.00th=[ 114], 90.00th=[ 127], 95.00th=[ 133], 00:40:09.435 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 190], 99.95th=[ 190], 00:40:09.435 | 99.99th=[ 190] 00:40:09.435 bw ( KiB/s): min= 512, max= 880, per=3.87%, avg=698.00, stdev=119.05, samples=20 00:40:09.435 iops : min= 128, max= 220, avg=174.50, stdev=29.76, samples=20 00:40:09.435 lat (msec) : 50=2.17%, 100=66.67%, 250=31.17% 00:40:09.435 cpu : usr=39.48%, sys=3.02%, ctx=1142, majf=0, minf=1075 00:40:09.435 IO depths : 1=0.1%, 2=3.4%, 4=13.4%, 8=69.2%, 16=14.0%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=90.7%, 8=6.3%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=1755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename1: (groupid=0, jobs=1): err= 0: pid=91775: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=159, BW=637KiB/s (653kB/s)(6400KiB/10042msec) 00:40:09.435 slat (usec): min=6, max=8040, avg=45.36, stdev=396.17 00:40:09.435 clat (msec): min=42, max=179, avg=99.77, stdev=24.12 00:40:09.435 lat (msec): min=42, max=179, avg=99.81, stdev=24.14 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 45], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 81], 00:40:09.435 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 103], 00:40:09.435 | 70.00th=[ 110], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 140], 00:40:09.435 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 180], 00:40:09.435 | 99.99th=[ 180] 00:40:09.435 bw ( KiB/s): min= 400, max= 880, per=3.54%, avg=638.85, stdev=118.35, samples=20 00:40:09.435 iops : min= 100, max= 220, avg=159.65, stdev=29.62, samples=20 00:40:09.435 lat (msec) : 50=1.00%, 100=58.25%, 250=40.75% 00:40:09.435 cpu : usr=42.15%, sys=3.11%, ctx=1466, majf=0, minf=1075 00:40:09.435 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.435 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.435 filename1: (groupid=0, jobs=1): err= 0: pid=91776: Mon Jul 22 17:18:10 2024 00:40:09.435 read: IOPS=200, BW=802KiB/s (821kB/s)(8072KiB/10068msec) 00:40:09.435 slat (nsec): min=5529, max=74166, avg=18006.33, stdev=7061.69 00:40:09.435 clat (msec): min=9, max=144, avg=79.57, stdev=24.61 00:40:09.435 lat (msec): min=9, max=144, avg=79.59, stdev=24.61 00:40:09.435 clat percentiles (msec): 00:40:09.435 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 58], 00:40:09.435 | 30.00th=[ 64], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 85], 00:40:09.435 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 115], 95.00th=[ 128], 00:40:09.435 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:40:09.435 | 99.99th=[ 146] 00:40:09.435 bw ( KiB/s): min= 608, max= 1160, per=4.45%, avg=802.90, stdev=133.89, samples=20 00:40:09.435 iops : min= 152, max= 290, avg=200.70, stdev=33.50, samples=20 00:40:09.435 lat (msec) : 10=0.79%, 50=7.98%, 100=73.74%, 250=17.49% 00:40:09.435 cpu : usr=42.54%, sys=3.54%, ctx=1350, majf=0, minf=1073 00:40:09.435 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:40:09.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.435 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename1: (groupid=0, jobs=1): err= 0: pid=91777: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=194, BW=776KiB/s (795kB/s)(7800KiB/10047msec) 00:40:09.436 slat (usec): min=5, max=5033, avg=29.41, stdev=214.38 00:40:09.436 clat (msec): min=35, max=168, avg=82.15, stdev=23.36 00:40:09.436 lat (msec): min=35, max=168, avg=82.18, stdev=23.36 00:40:09.436 clat percentiles (msec): 00:40:09.436 | 1.00th=[ 44], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 61], 00:40:09.436 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 85], 00:40:09.436 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 130], 00:40:09.436 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:40:09.436 | 99.99th=[ 169] 00:40:09.436 bw ( KiB/s): min= 616, max= 968, per=4.30%, avg=775.20, stdev=103.27, samples=20 00:40:09.436 iops : min= 154, max= 242, avg=193.80, stdev=25.82, samples=20 00:40:09.436 lat (msec) : 50=4.67%, 100=77.49%, 250=17.85% 00:40:09.436 cpu : usr=43.06%, sys=3.38%, ctx=1370, majf=0, minf=1075 00:40:09.436 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename1: (groupid=0, jobs=1): err= 0: pid=91778: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=195, BW=780KiB/s (799kB/s)(7856KiB/10066msec) 00:40:09.436 slat (usec): min=5, max=8035, avg=26.26, stdev=221.77 00:40:09.436 clat (msec): min=29, max=143, avg=81.70, stdev=23.76 00:40:09.436 lat (msec): min=29, max=143, avg=81.73, stdev=23.76 00:40:09.436 clat percentiles 
(msec): 00:40:09.436 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 59], 00:40:09.436 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 85], 00:40:09.436 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 120], 95.00th=[ 129], 00:40:09.436 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 144], 00:40:09.436 | 99.99th=[ 144] 00:40:09.436 bw ( KiB/s): min= 616, max= 1017, per=4.33%, avg=781.25, stdev=108.79, samples=20 00:40:09.436 iops : min= 154, max= 254, avg=195.25, stdev=27.18, samples=20 00:40:09.436 lat (msec) : 50=8.86%, 100=72.30%, 250=18.84% 00:40:09.436 cpu : usr=34.68%, sys=2.73%, ctx=1104, majf=0, minf=1075 00:40:09.436 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename1: (groupid=0, jobs=1): err= 0: pid=91779: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=192, BW=771KiB/s (789kB/s)(7724KiB/10024msec) 00:40:09.436 slat (usec): min=5, max=8037, avg=39.74, stdev=407.59 00:40:09.436 clat (msec): min=35, max=162, avg=82.88, stdev=23.14 00:40:09.436 lat (msec): min=35, max=162, avg=82.91, stdev=23.16 00:40:09.436 clat percentiles (msec): 00:40:09.436 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 61], 00:40:09.436 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 85], 00:40:09.436 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 131], 00:40:09.436 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 163], 00:40:09.436 | 99.99th=[ 163] 00:40:09.436 bw ( KiB/s): min= 616, max= 888, per=4.26%, avg=768.05, stdev=90.68, samples=20 00:40:09.436 iops : min= 154, max= 222, avg=192.00, stdev=22.66, samples=20 00:40:09.436 lat (msec) : 50=6.89%, 100=75.04%, 250=18.07% 00:40:09.436 cpu : usr=30.40%, sys=2.82%, ctx=912, majf=0, minf=1075 00:40:09.436 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename2: (groupid=0, jobs=1): err= 0: pid=91780: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=186, BW=747KiB/s (765kB/s)(7516KiB/10059msec) 00:40:09.436 slat (usec): min=5, max=8044, avg=29.05, stdev=271.67 00:40:09.436 clat (msec): min=41, max=169, avg=85.38, stdev=22.11 00:40:09.436 lat (msec): min=41, max=169, avg=85.41, stdev=22.11 00:40:09.436 clat percentiles (msec): 00:40:09.436 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 65], 00:40:09.436 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 88], 00:40:09.436 | 70.00th=[ 93], 80.00th=[ 104], 90.00th=[ 120], 95.00th=[ 131], 00:40:09.436 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 169], 99.95th=[ 169], 00:40:09.436 | 99.99th=[ 169] 00:40:09.436 bw ( KiB/s): min= 560, max= 920, per=4.13%, avg=745.40, stdev=97.28, samples=20 00:40:09.436 iops : min= 140, max= 230, avg=186.30, stdev=24.32, samples=20 00:40:09.436 lat (msec) : 50=3.09%, 100=74.45%, 250=22.46% 00:40:09.436 cpu : usr=38.92%, sys=2.84%, ctx=1196, majf=0, minf=1073 00:40:09.436 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 
8=78.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=88.1%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename2: (groupid=0, jobs=1): err= 0: pid=91781: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=192, BW=769KiB/s (787kB/s)(7736KiB/10063msec) 00:40:09.436 slat (usec): min=3, max=8033, avg=28.14, stdev=234.68 00:40:09.436 clat (msec): min=14, max=149, avg=82.90, stdev=24.53 00:40:09.436 lat (msec): min=14, max=149, avg=82.93, stdev=24.54 00:40:09.436 clat percentiles (msec): 00:40:09.436 | 1.00th=[ 16], 5.00th=[ 49], 10.00th=[ 55], 20.00th=[ 63], 00:40:09.436 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 86], 00:40:09.436 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 122], 95.00th=[ 129], 00:40:09.436 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 150], 99.95th=[ 150], 00:40:09.436 | 99.99th=[ 150] 00:40:09.436 bw ( KiB/s): min= 616, max= 1152, per=4.27%, avg=769.70, stdev=128.53, samples=20 00:40:09.436 iops : min= 154, max= 288, avg=192.40, stdev=32.16, samples=20 00:40:09.436 lat (msec) : 20=2.48%, 50=3.62%, 100=73.63%, 250=20.27% 00:40:09.436 cpu : usr=38.41%, sys=3.04%, ctx=1319, majf=0, minf=1075 00:40:09.436 IO depths : 1=0.2%, 2=1.3%, 4=4.7%, 8=78.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename2: (groupid=0, jobs=1): err= 0: pid=91782: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=190, BW=764KiB/s (782kB/s)(7684KiB/10059msec) 00:40:09.436 slat (usec): min=7, max=4046, avg=20.71, stdev=92.14 00:40:09.436 clat (msec): min=35, max=143, avg=83.61, stdev=22.16 00:40:09.436 lat (msec): min=35, max=143, avg=83.63, stdev=22.15 00:40:09.436 clat percentiles (msec): 00:40:09.436 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 64], 00:40:09.436 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 86], 00:40:09.436 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 121], 95.00th=[ 129], 00:40:09.436 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:40:09.436 | 99.99th=[ 144] 00:40:09.436 bw ( KiB/s): min= 560, max= 928, per=4.22%, avg=760.70, stdev=97.69, samples=20 00:40:09.436 iops : min= 140, max= 232, avg=190.10, stdev=24.42, samples=20 00:40:09.436 lat (msec) : 50=4.11%, 100=76.73%, 250=19.16% 00:40:09.436 cpu : usr=37.87%, sys=2.81%, ctx=1282, majf=0, minf=1075 00:40:09.436 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.436 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.436 filename2: (groupid=0, jobs=1): err= 0: pid=91783: Mon Jul 22 17:18:10 2024 00:40:09.436 read: IOPS=196, BW=786KiB/s (805kB/s)(7868KiB/10004msec) 00:40:09.436 slat (usec): min=4, max=8041, avg=26.00, stdev=213.43 00:40:09.436 clat (msec): min=6, max=143, avg=81.21, stdev=24.93 00:40:09.436 lat (msec): min=6, 
max=143, avg=81.24, stdev=24.93 00:40:09.436 clat percentiles (msec): 00:40:09.436 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:40:09.436 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 85], 00:40:09.436 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 118], 95.00th=[ 127], 00:40:09.436 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:40:09.436 | 99.99th=[ 144] 00:40:09.436 bw ( KiB/s): min= 614, max= 920, per=4.23%, avg=762.05, stdev=88.03, samples=19 00:40:09.436 iops : min= 153, max= 230, avg=190.42, stdev=22.00, samples=19 00:40:09.436 lat (msec) : 10=1.12%, 20=1.17%, 50=6.71%, 100=73.72%, 250=17.29% 00:40:09.436 cpu : usr=31.11%, sys=2.26%, ctx=953, majf=0, minf=1074 00:40:09.436 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=80.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:40:09.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.436 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.437 filename2: (groupid=0, jobs=1): err= 0: pid=91784: Mon Jul 22 17:18:10 2024 00:40:09.437 read: IOPS=187, BW=751KiB/s (769kB/s)(7564KiB/10066msec) 00:40:09.437 slat (usec): min=8, max=7060, avg=22.71, stdev=162.51 00:40:09.437 clat (msec): min=9, max=152, avg=84.87, stdev=25.70 00:40:09.437 lat (msec): min=9, max=152, avg=84.89, stdev=25.69 00:40:09.437 clat percentiles (msec): 00:40:09.437 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 68], 00:40:09.437 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 87], 00:40:09.437 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 131], 00:40:09.437 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 153], 00:40:09.437 | 99.99th=[ 153] 00:40:09.437 bw ( KiB/s): min= 593, max= 1264, per=4.16%, avg=750.00, stdev=143.46, samples=20 00:40:09.437 iops : min= 148, max= 316, avg=187.40, stdev=35.90, samples=20 00:40:09.437 lat (msec) : 10=0.85%, 20=2.54%, 50=2.70%, 100=69.70%, 250=24.22% 00:40:09.437 cpu : usr=32.26%, sys=2.88%, ctx=1045, majf=0, minf=1075 00:40:09.437 IO depths : 1=0.2%, 2=1.5%, 4=5.4%, 8=77.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:40:09.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 issued rwts: total=1891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.437 filename2: (groupid=0, jobs=1): err= 0: pid=91785: Mon Jul 22 17:18:10 2024 00:40:09.437 read: IOPS=188, BW=755KiB/s (773kB/s)(7572KiB/10035msec) 00:40:09.437 slat (usec): min=5, max=8035, avg=30.23, stdev=318.83 00:40:09.437 clat (msec): min=45, max=173, avg=84.59, stdev=24.00 00:40:09.437 lat (msec): min=45, max=174, avg=84.62, stdev=24.00 00:40:09.437 clat percentiles (msec): 00:40:09.437 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 62], 00:40:09.437 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:40:09.437 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 121], 95.00th=[ 131], 00:40:09.437 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 174], 00:40:09.437 | 99.99th=[ 174] 00:40:09.437 bw ( KiB/s): min= 552, max= 920, per=4.18%, avg=753.10, stdev=108.73, samples=20 00:40:09.437 iops : min= 138, max= 230, avg=188.25, stdev=27.21, samples=20 00:40:09.437 lat (msec) : 50=5.60%, 100=75.44%, 250=18.96% 00:40:09.437 cpu : 
usr=31.79%, sys=2.58%, ctx=938, majf=0, minf=1074 00:40:09.437 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.6%, 16=15.1%, 32=0.0%, >=64=0.0% 00:40:09.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.437 filename2: (groupid=0, jobs=1): err= 0: pid=91786: Mon Jul 22 17:18:10 2024 00:40:09.437 read: IOPS=194, BW=776KiB/s (795kB/s)(7776KiB/10020msec) 00:40:09.437 slat (usec): min=5, max=8066, avg=47.26, stdev=481.05 00:40:09.437 clat (msec): min=29, max=154, avg=82.22, stdev=23.33 00:40:09.437 lat (msec): min=29, max=154, avg=82.27, stdev=23.32 00:40:09.437 clat percentiles (msec): 00:40:09.437 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 61], 00:40:09.437 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 85], 00:40:09.437 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 121], 95.00th=[ 129], 00:40:09.437 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:40:09.437 | 99.99th=[ 155] 00:40:09.437 bw ( KiB/s): min= 616, max= 920, per=4.28%, avg=772.40, stdev=100.21, samples=20 00:40:09.437 iops : min= 154, max= 230, avg=193.10, stdev=25.05, samples=20 00:40:09.437 lat (msec) : 50=7.51%, 100=72.99%, 250=19.50% 00:40:09.437 cpu : usr=32.04%, sys=2.25%, ctx=1015, majf=0, minf=1072 00:40:09.437 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:40:09.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.437 filename2: (groupid=0, jobs=1): err= 0: pid=91787: Mon Jul 22 17:18:10 2024 00:40:09.437 read: IOPS=185, BW=742KiB/s (760kB/s)(7428KiB/10009msec) 00:40:09.437 slat (usec): min=4, max=8045, avg=23.23, stdev=186.38 00:40:09.437 clat (msec): min=10, max=164, avg=86.10, stdev=25.09 00:40:09.437 lat (msec): min=10, max=164, avg=86.12, stdev=25.09 00:40:09.437 clat percentiles (msec): 00:40:09.437 | 1.00th=[ 32], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 63], 00:40:09.437 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:40:09.437 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 125], 95.00th=[ 132], 00:40:09.437 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:40:09.437 | 99.99th=[ 165] 00:40:09.437 bw ( KiB/s): min= 495, max= 920, per=4.03%, avg=727.58, stdev=120.51, samples=19 00:40:09.437 iops : min= 123, max= 230, avg=181.84, stdev=30.19, samples=19 00:40:09.437 lat (msec) : 20=0.86%, 50=4.47%, 100=70.06%, 250=24.61% 00:40:09.437 cpu : usr=31.01%, sys=2.54%, ctx=948, majf=0, minf=1075 00:40:09.437 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:40:09.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.437 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:09.437 00:40:09.437 Run status group 0 (all jobs): 00:40:09.437 READ: bw=17.6MiB/s (18.5MB/s), 637KiB/s-825KiB/s (653kB/s-845kB/s), io=178MiB (186MB), run=10004-10094msec 00:40:10.388 
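Each of the 24 jobs above reports its own bandwidth line ("bw ( KiB/s): ... avg=..."), while the Run status group line gives the aggregate. When comparing nightly runs it can help to re-derive the aggregate from the per-job averages as a sanity check; a rough helper, assuming the console output above was saved to fio.log (it sums every per-job bandwidth line in that file):

  # Sum the per-job average bandwidths (KiB/s) and compare against the
  # aggregate reported on the "READ:" line of the run status group.
  awk -F'avg=' '/bw \(/ { split($2, f, ","); sum += f[1] }
       END { printf "sum of per-job avg: %.0f KiB/s (%.1f MiB/s)\n", sum, sum/1024 }' fio.log
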
----------------------------------------------------- 00:40:10.388 Suppressions used: 00:40:10.388 count bytes template 00:40:10.388 45 402 /usr/src/fio/parse.c 00:40:10.388 1 8 libtcmalloc_minimal.so 00:40:10.388 1 904 libcrypto.so 00:40:10.388 ----------------------------------------------------- 00:40:10.388 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null2 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 bdev_null0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.388 [2024-07-22 17:18:11.986734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.388 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=1 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.389 bdev_null1 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.389 17:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:10.647 { 00:40:10.647 "params": { 00:40:10.647 "name": "Nvme$subsystem", 00:40:10.647 "trtype": "$TEST_TRANSPORT", 00:40:10.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.647 "adrfam": "ipv4", 00:40:10.647 "trsvcid": "$NVMF_PORT", 00:40:10.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:40:10.647 "hdgst": ${hdgst:-false}, 00:40:10.647 "ddgst": ${ddgst:-false} 00:40:10.647 }, 00:40:10.647 "method": "bdev_nvme_attach_controller" 00:40:10.647 } 00:40:10.647 EOF 00:40:10.647 )") 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:10.647 { 00:40:10.647 "params": { 00:40:10.647 "name": "Nvme$subsystem", 00:40:10.647 "trtype": "$TEST_TRANSPORT", 00:40:10.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.647 "adrfam": "ipv4", 00:40:10.647 "trsvcid": "$NVMF_PORT", 00:40:10.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.647 "hdgst": ${hdgst:-false}, 00:40:10.647 "ddgst": ${ddgst:-false} 00:40:10.647 }, 00:40:10.647 "method": "bdev_nvme_attach_controller" 00:40:10.647 } 00:40:10.647 EOF 00:40:10.647 )") 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:10.647 "params": { 00:40:10.647 "name": "Nvme0", 00:40:10.647 "trtype": "tcp", 00:40:10.647 "traddr": "10.0.0.2", 00:40:10.647 "adrfam": "ipv4", 00:40:10.647 "trsvcid": "4420", 00:40:10.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:10.647 "hdgst": false, 00:40:10.647 "ddgst": false 00:40:10.647 }, 00:40:10.647 "method": "bdev_nvme_attach_controller" 00:40:10.647 },{ 00:40:10.647 "params": { 00:40:10.647 "name": "Nvme1", 00:40:10.647 "trtype": "tcp", 00:40:10.647 "traddr": "10.0.0.2", 00:40:10.647 "adrfam": "ipv4", 00:40:10.647 "trsvcid": "4420", 00:40:10.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:10.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:10.647 "hdgst": false, 00:40:10.647 "ddgst": false 00:40:10.647 }, 00:40:10.647 "method": "bdev_nvme_attach_controller" 00:40:10.647 }' 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:10.647 17:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:10.905 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:10.905 ... 00:40:10.905 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:10.905 ... 
00:40:10.905 fio-3.35 00:40:10.905 Starting 4 threads 00:40:17.474 00:40:17.474 filename0: (groupid=0, jobs=1): err= 0: pid=91929: Mon Jul 22 17:18:18 2024 00:40:17.474 read: IOPS=2157, BW=16.9MiB/s (17.7MB/s)(84.3MiB/5003msec) 00:40:17.474 slat (usec): min=5, max=305, avg=14.08, stdev= 6.77 00:40:17.474 clat (usec): min=749, max=7832, avg=3671.01, stdev=1216.82 00:40:17.474 lat (usec): min=759, max=7848, avg=3685.09, stdev=1216.79 00:40:17.474 clat percentiles (usec): 00:40:17.474 | 1.00th=[ 1467], 5.00th=[ 1565], 10.00th=[ 1598], 20.00th=[ 2376], 00:40:17.474 | 30.00th=[ 3195], 40.00th=[ 3458], 50.00th=[ 4080], 60.00th=[ 4359], 00:40:17.474 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5276], 00:40:17.474 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 7046], 99.95th=[ 7111], 00:40:17.474 | 99.99th=[ 7767] 00:40:17.474 bw ( KiB/s): min=15760, max=18656, per=29.53%, avg=17342.22, stdev=1174.51, samples=9 00:40:17.474 iops : min= 1970, max= 2332, avg=2167.78, stdev=146.81, samples=9 00:40:17.474 lat (usec) : 750=0.02%, 1000=0.15% 00:40:17.474 lat (msec) : 2=15.29%, 4=33.23%, 10=51.31% 00:40:17.474 cpu : usr=89.48%, sys=9.02%, ctx=83, majf=0, minf=1075 00:40:17.474 IO depths : 1=0.1%, 2=2.3%, 4=62.8%, 8=34.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 issued rwts: total=10795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:17.474 filename0: (groupid=0, jobs=1): err= 0: pid=91930: Mon Jul 22 17:18:18 2024 00:40:17.474 read: IOPS=1735, BW=13.6MiB/s (14.2MB/s)(67.8MiB/5001msec) 00:40:17.474 slat (nsec): min=4722, max=58750, avg=17402.40, stdev=5665.09 00:40:17.474 clat (usec): min=1135, max=9213, avg=4547.38, stdev=935.35 00:40:17.474 lat (usec): min=1151, max=9233, avg=4564.78, stdev=935.22 00:40:17.474 clat percentiles (usec): 00:40:17.474 | 1.00th=[ 1860], 5.00th=[ 2474], 10.00th=[ 3032], 20.00th=[ 4228], 00:40:17.474 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:40:17.474 | 70.00th=[ 4752], 80.00th=[ 5080], 90.00th=[ 5473], 95.00th=[ 5800], 00:40:17.474 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 8979], 00:40:17.474 | 99.99th=[ 9241] 00:40:17.474 bw ( KiB/s): min=11568, max=17712, per=23.54%, avg=13824.00, stdev=1640.29, samples=9 00:40:17.474 iops : min= 1446, max= 2214, avg=1728.00, stdev=205.04, samples=9 00:40:17.474 lat (msec) : 2=1.30%, 4=13.18%, 10=85.52% 00:40:17.474 cpu : usr=90.06%, sys=9.16%, ctx=10, majf=0, minf=1073 00:40:17.474 IO depths : 1=0.1%, 2=18.3%, 4=53.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 issued rwts: total=8679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:17.474 filename1: (groupid=0, jobs=1): err= 0: pid=91931: Mon Jul 22 17:18:18 2024 00:40:17.474 read: IOPS=1726, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5001msec) 00:40:17.474 slat (usec): min=5, max=102, avg=19.22, stdev= 5.42 00:40:17.474 clat (usec): min=1089, max=7607, avg=4563.34, stdev=797.05 00:40:17.474 lat (usec): min=1097, max=7626, avg=4582.56, stdev=796.96 00:40:17.474 clat percentiles (usec): 00:40:17.474 | 1.00th=[ 1696], 5.00th=[ 2900], 10.00th=[ 3621], 20.00th=[ 
4293], 00:40:17.474 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:40:17.474 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5604], 00:40:17.474 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7242], 99.95th=[ 7242], 00:40:17.474 | 99.99th=[ 7635] 00:40:17.474 bw ( KiB/s): min=12784, max=15328, per=23.61%, avg=13864.22, stdev=837.21, samples=9 00:40:17.474 iops : min= 1598, max= 1916, avg=1733.00, stdev=104.67, samples=9 00:40:17.474 lat (msec) : 2=1.65%, 4=11.20%, 10=87.15% 00:40:17.474 cpu : usr=90.02%, sys=8.98%, ctx=68, majf=0, minf=1074 00:40:17.474 IO depths : 1=0.1%, 2=19.0%, 4=53.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 issued rwts: total=8632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:17.474 filename1: (groupid=0, jobs=1): err= 0: pid=91932: Mon Jul 22 17:18:18 2024 00:40:17.474 read: IOPS=1722, BW=13.5MiB/s (14.1MB/s)(67.3MiB/5001msec) 00:40:17.474 slat (nsec): min=4013, max=53390, avg=18290.66, stdev=5044.59 00:40:17.474 clat (usec): min=1283, max=10799, avg=4577.71, stdev=805.68 00:40:17.474 lat (usec): min=1298, max=10818, avg=4596.00, stdev=805.55 00:40:17.474 clat percentiles (usec): 00:40:17.474 | 1.00th=[ 1795], 5.00th=[ 2933], 10.00th=[ 3654], 20.00th=[ 4293], 00:40:17.474 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:40:17.474 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5604], 00:40:17.474 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[10552], 00:40:17.474 | 99.99th=[10814] 00:40:17.474 bw ( KiB/s): min=12672, max=15376, per=23.58%, avg=13847.11, stdev=866.26, samples=9 00:40:17.474 iops : min= 1584, max= 1922, avg=1730.89, stdev=108.28, samples=9 00:40:17.474 lat (msec) : 2=1.46%, 4=11.13%, 10=87.31%, 20=0.09% 00:40:17.474 cpu : usr=90.14%, sys=9.04%, ctx=10, majf=0, minf=1075 00:40:17.474 IO depths : 1=0.1%, 2=19.0%, 4=53.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.474 issued rwts: total=8616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:17.474 00:40:17.474 Run status group 0 (all jobs): 00:40:17.474 READ: bw=57.3MiB/s (60.1MB/s), 13.5MiB/s-16.9MiB/s (14.1MB/s-17.7MB/s), io=287MiB (301MB), run=5001-5003msec 00:40:18.473 ----------------------------------------------------- 00:40:18.473 Suppressions used: 00:40:18.473 count bytes template 00:40:18.473 6 52 /usr/src/fio/parse.c 00:40:18.473 1 8 libtcmalloc_minimal.so 00:40:18.473 1 904 libcrypto.so 00:40:18.473 ----------------------------------------------------- 00:40:18.473 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.473 00:40:18.473 real 0m29.360s 00:40:18.473 user 2m6.737s 00:40:18.473 sys 0m11.572s 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:18.473 17:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.473 ************************************ 00:40:18.473 END TEST fio_dif_rand_params 00:40:18.473 ************************************ 00:40:18.473 17:18:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:18.473 17:18:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:18.473 17:18:20 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:18.473 17:18:20 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.473 17:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:18.473 ************************************ 00:40:18.473 START TEST fio_dif_digest 00:40:18.473 ************************************ 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:18.473 17:18:20 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:18.473 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:18.474 bdev_null0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:18.474 [2024-07-22 17:18:20.063351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:18.474 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:18.474 { 00:40:18.474 "params": { 00:40:18.474 "name": "Nvme$subsystem", 00:40:18.474 "trtype": "$TEST_TRANSPORT", 00:40:18.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:18.474 "adrfam": "ipv4", 00:40:18.474 "trsvcid": "$NVMF_PORT", 00:40:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:18.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:18.474 "hdgst": ${hdgst:-false}, 00:40:18.474 "ddgst": ${ddgst:-false} 00:40:18.474 }, 00:40:18.474 "method": "bdev_nvme_attach_controller" 00:40:18.474 } 00:40:18.474 EOF 00:40:18.474 )") 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
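On the target side, both dif tests are provisioned by the RPC sequence visible earlier: a null bdev with 16-byte metadata and DIF type 3, an NVMe-oF subsystem, a namespace, and a TCP listener, all deleted again after the run. rpc_cmd is the autotest wrapper around scripts/rpc.py, so against a standalone nvmf_tgt roughly the same flow would be (flags copied from this log; the transport-create line appears later, in the abort test setup):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                        # once per target
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# teardown, as destroy_subsystems does after each test:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_null_delete bdev_null0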
00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:18.748 "params": { 00:40:18.748 "name": "Nvme0", 00:40:18.748 "trtype": "tcp", 00:40:18.748 "traddr": "10.0.0.2", 00:40:18.748 "adrfam": "ipv4", 00:40:18.748 "trsvcid": "4420", 00:40:18.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:18.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:18.748 "hdgst": true, 00:40:18.748 "ddgst": true 00:40:18.748 }, 00:40:18.748 "method": "bdev_nvme_attach_controller" 00:40:18.748 }' 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:18.748 17:18:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.748 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:18.748 ... 00:40:18.748 fio-3.35 00:40:18.748 Starting 3 threads 00:40:30.947 00:40:30.947 filename0: (groupid=0, jobs=1): err= 0: pid=92048: Mon Jul 22 17:18:31 2024 00:40:30.947 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10009msec) 00:40:30.947 slat (nsec): min=5596, max=76203, avg=25075.32, stdev=11034.71 00:40:30.947 clat (usec): min=8602, max=24789, avg=14987.77, stdev=1134.69 00:40:30.947 lat (usec): min=8607, max=24813, avg=15012.84, stdev=1135.09 00:40:30.947 clat percentiles (usec): 00:40:30.947 | 1.00th=[13566], 5.00th=[14222], 10.00th=[14353], 20.00th=[14484], 00:40:30.947 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[14877], 00:40:30.947 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[16057], 00:40:30.947 | 99.00th=[20055], 99.50th=[21627], 99.90th=[24773], 99.95th=[24773], 00:40:30.947 | 99.99th=[24773] 00:40:30.947 bw ( KiB/s): min=19968, max=26880, per=33.35%, avg=25502.89, stdev=1460.02, samples=19 00:40:30.947 iops : min= 156, max= 210, avg=199.21, stdev=11.39, samples=19 00:40:30.947 lat (msec) : 10=0.15%, 20=99.00%, 50=0.85% 00:40:30.947 cpu : usr=89.69%, sys=9.71%, ctx=16, majf=0, minf=1062 00:40:30.947 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.947 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:30.947 filename0: (groupid=0, jobs=1): err= 0: pid=92049: Mon Jul 22 17:18:31 2024 00:40:30.947 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10003msec) 00:40:30.947 slat (nsec): min=9124, max=76594, avg=26244.27, stdev=10560.52 00:40:30.947 clat (usec): min=13065, max=24785, avg=15000.06, stdev=1113.91 00:40:30.947 lat (usec): min=13081, max=24809, avg=15026.30, stdev=1114.32 00:40:30.947 clat percentiles (usec): 00:40:30.947 | 1.00th=[13698], 5.00th=[14222], 10.00th=[14353], 20.00th=[14484], 00:40:30.947 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[14877], 
00:40:30.947 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[16057], 00:40:30.947 | 99.00th=[20055], 99.50th=[21627], 99.90th=[24773], 99.95th=[24773], 00:40:30.947 | 99.99th=[24773] 00:40:30.947 bw ( KiB/s): min=19968, max=26112, per=33.31%, avg=25467.89, stdev=1408.11, samples=19 00:40:30.947 iops : min= 156, max= 204, avg=198.95, stdev=11.00, samples=19 00:40:30.947 lat (msec) : 20=99.10%, 50=0.90% 00:40:30.947 cpu : usr=90.19%, sys=9.15%, ctx=46, majf=0, minf=1072 00:40:30.947 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.947 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:30.947 filename0: (groupid=0, jobs=1): err= 0: pid=92050: Mon Jul 22 17:18:31 2024 00:40:30.947 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10004msec) 00:40:30.947 slat (nsec): min=5366, max=70722, avg=25987.16, stdev=10516.91 00:40:30.947 clat (usec): min=13061, max=24784, avg=15002.26, stdev=1116.02 00:40:30.947 lat (usec): min=13077, max=24807, avg=15028.25, stdev=1116.35 00:40:30.947 clat percentiles (usec): 00:40:30.947 | 1.00th=[13698], 5.00th=[14222], 10.00th=[14353], 20.00th=[14484], 00:40:30.947 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[14877], 00:40:30.947 | 70.00th=[15008], 80.00th=[15008], 90.00th=[15401], 95.00th=[16057], 00:40:30.947 | 99.00th=[20055], 99.50th=[21627], 99.90th=[24773], 99.95th=[24773], 00:40:30.947 | 99.99th=[24773] 00:40:30.947 bw ( KiB/s): min=19968, max=26112, per=33.30%, avg=25465.26, stdev=1408.31, samples=19 00:40:30.947 iops : min= 156, max= 204, avg=198.95, stdev=11.00, samples=19 00:40:30.947 lat (msec) : 20=99.20%, 50=0.80% 00:40:30.947 cpu : usr=89.95%, sys=9.47%, ctx=16, majf=0, minf=1074 00:40:30.947 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.947 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:30.947 00:40:30.947 Run status group 0 (all jobs): 00:40:30.947 READ: bw=74.7MiB/s (78.3MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=747MiB (784MB), run=10003-10009msec 00:40:31.885 ----------------------------------------------------- 00:40:31.885 Suppressions used: 00:40:31.885 count bytes template 00:40:31.885 5 44 /usr/src/fio/parse.c 00:40:31.885 1 8 libtcmalloc_minimal.so 00:40:31.885 1 904 libcrypto.so 00:40:31.885 ----------------------------------------------------- 00:40:31.885 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.885 
17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.885 00:40:31.885 real 0m13.168s 00:40:31.885 user 0m29.668s 00:40:31.885 sys 0m3.288s 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:31.885 17:18:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.885 ************************************ 00:40:31.885 END TEST fio_dif_digest 00:40:31.885 ************************************ 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:31.885 17:18:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:31.885 17:18:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:31.885 rmmod nvme_tcp 00:40:31.885 rmmod nvme_fabrics 00:40:31.885 rmmod nvme_keyring 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 91268 ']' 00:40:31.885 17:18:33 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 91268 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 91268 ']' 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 91268 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91268 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:31.885 killing process with pid 91268 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91268' 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@967 -- # kill 91268 00:40:31.885 17:18:33 nvmf_dif -- common/autotest_common.sh@972 -- # wait 91268 00:40:33.819 17:18:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:33.819 17:18:34 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:33.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:33.819 Waiting for block devices as requested 00:40:33.819 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:34.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:34.077 
17:18:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:34.077 17:18:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:34.077 17:18:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:34.077 17:18:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:34.077 17:18:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.077 17:18:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:34.077 17:18:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.077 17:18:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:34.077 00:40:34.077 real 1m12.875s 00:40:34.077 user 4m8.540s 00:40:34.077 sys 0m24.172s 00:40:34.077 17:18:35 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:34.077 17:18:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:34.077 ************************************ 00:40:34.077 END TEST nvmf_dif 00:40:34.077 ************************************ 00:40:34.334 17:18:35 -- common/autotest_common.sh@1142 -- # return 0 00:40:34.334 17:18:35 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:34.334 17:18:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:34.334 17:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:34.334 17:18:35 -- common/autotest_common.sh@10 -- # set +x 00:40:34.334 ************************************ 00:40:34.334 START TEST nvmf_abort_qd_sizes 00:40:34.334 ************************************ 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:34.334 * Looking for test storage... 
00:40:34.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.334 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:34.335 17:18:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:34.335 Cannot find device "nvmf_tgt_br" 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:34.335 Cannot find device "nvmf_tgt_br2" 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:34.335 Cannot find device "nvmf_tgt_br" 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:34.335 Cannot find device "nvmf_tgt_br2" 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:40:34.335 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:34.593 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:34.593 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:34.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:34.593 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:40:34.593 17:18:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:34.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:34.593 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:40:34.593 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:34.594 17:18:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:34.594 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:40:34.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:34.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:40:34.852 00:40:34.852 --- 10.0.0.2 ping statistics --- 00:40:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:34.852 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:34.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:34.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:40:34.852 00:40:34.852 --- 10.0.0.3 ping statistics --- 00:40:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:34.852 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:34.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:34.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:40:34.852 00:40:34.852 --- 10.0.0.1 ping statistics --- 00:40:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:34.852 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:34.852 17:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:35.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:35.418 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:35.675 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=92678 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 92678 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 92678 ']' 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:35.676 17:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:35.676 [2024-07-22 17:18:37.289553] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
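The nvmf_veth_init sequence traced above is what makes the 10.0.0.x pings work with NET_TYPE=virt: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target addresses 10.0.0.2/10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge plus two iptables rules stitch them together. Condensed to a single target interface (commands taken from the trace; the 10.0.0.3 leg repeats the same pattern):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target address inside the namespace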
00:40:35.676 [2024-07-22 17:18:37.289687] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.934 [2024-07-22 17:18:37.465536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:36.501 [2024-07-22 17:18:37.819611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:36.501 [2024-07-22 17:18:37.819677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:36.501 [2024-07-22 17:18:37.819693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:36.502 [2024-07-22 17:18:37.819709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:36.502 [2024-07-22 17:18:37.819724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:36.502 [2024-07-22 17:18:37.819871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.502 [2024-07-22 17:18:37.820216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:36.502 [2024-07-22 17:18:37.820615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:36.502 [2024-07-22 17:18:37.820617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.760 [2024-07-22 17:18:38.144510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:40:36.760 17:18:38 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:40:36.760 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
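The nvme_in_userspace expansion just traced discovers NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVM Express). The discovery itself boils down to the pipeline below, copied from the trace; on this VM it yields the two emulated controllers.

# Enumerate NVMe controllers (class/subclass 0108, prog-if 02) by PCI address.
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
# -> 0000:00:10.0
#    0000:00:11.0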
00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:37.019 17:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:37.019 ************************************ 00:40:37.019 START TEST spdk_target_abort 00:40:37.019 ************************************ 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:37.019 spdk_targetn1 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:37.019 [2024-07-22 17:18:38.516896] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:37.019 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:37.020 [2024-07-22 17:18:38.553284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:37.020 17:18:38 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:37.020 17:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:40.303 Initializing NVMe Controllers 00:40:40.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:40.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:40.303 Initialization complete. Launching workers. 
00:40:40.303 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10073, failed: 0 00:40:40.303 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1039, failed to submit 9034 00:40:40.303 success 757, unsuccess 282, failed 0 00:40:40.561 17:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:40.561 17:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:43.860 Initializing NVMe Controllers 00:40:43.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:43.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:43.860 Initialization complete. Launching workers. 00:40:43.860 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8655, failed: 0 00:40:43.860 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1124, failed to submit 7531 00:40:43.860 success 357, unsuccess 767, failed 0 00:40:43.860 17:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:43.861 17:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:48.043 Initializing NVMe Controllers 00:40:48.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:48.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:48.043 Initialization complete. Launching workers. 
00:40:48.043 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27614, failed: 0 00:40:48.043 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2087, failed to submit 25527 00:40:48.043 success 408, unsuccess 1679, failed 0 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:48.043 17:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 92678 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 92678 ']' 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 92678 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92678 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92678' 00:40:48.043 killing process with pid 92678 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 92678 00:40:48.043 17:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 92678 00:40:48.977 ************************************ 00:40:48.977 END TEST spdk_target_abort 00:40:48.977 ************************************ 00:40:48.977 00:40:48.977 real 0m12.080s 00:40:48.977 user 0m46.489s 00:40:48.977 sys 0m2.782s 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:48.977 17:18:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:48.977 17:18:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:48.977 17:18:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:48.977 17:18:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:48.977 17:18:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:48.977 
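The three result blocks above are the rabort helper from abort_qd_sizes.sh sweeping queue depths 4, 24 and 64 against the subsystem just created; each pass reports completed I/Os, aborts submitted, and, roughly, aborts that landed ("success") versus ones the target could no longer honor ("unsuccess"). A rough shell equivalent of the sweep and the teardown that follows, with the binary path and connection string taken from the trace:

    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
    # Teardown, mirroring the RPCs issued at the end of the test above
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
    scripts/rpc.py bdev_nvme_detach_controller spdk_target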
************************************ 00:40:48.977 START TEST kernel_target_abort 00:40:48.977 ************************************ 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:48.977 17:18:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:49.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:49.543 Waiting for block devices as requested 00:40:49.543 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:49.543 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:50.108 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:50.108 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:50.108 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:50.108 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:40:50.108 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:40:50.109 No valid GPT data, bailing 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:40:50.109 No valid GPT data, bailing 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
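This scan (and its continuation for nvme0n3 and nvme1n1 below) is nvmf/common.sh looking for a blank NVMe namespace to back the kernel target: zoned namespaces are skipped, and a device only qualifies if spdk-gpt.py finds no GPT ("No valid GPT data, bailing") and blkid reports no partition-table type. A condensed sketch of that logic, not the literal library code:

    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # zoned namespaces are skipped outright
        [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]] && continue
        # no PTTYPE from blkid (and no GPT per spdk-gpt.py) means the disk is free to use
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]]; then
            nvme=/dev/$dev   # the last free device wins; this run settles on /dev/nvme1n1
        fi
    done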
00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:40:50.109 No valid GPT data, bailing 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:40:50.109 No valid GPT data, bailing 00:40:50.109 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c89efdb4-7a1d-46d1-bbb7-b7f038000b45 --hostid=c89efdb4-7a1d-46d1-bbb7-b7f038000b45 -a 10.0.0.1 -t tcp -s 4420 00:40:50.368 00:40:50.368 Discovery Log Number of Records 2, Generation counter 2 00:40:50.368 =====Discovery Log Entry 0====== 00:40:50.368 trtype: tcp 00:40:50.368 adrfam: ipv4 00:40:50.368 subtype: current discovery subsystem 00:40:50.368 treq: not specified, sq flow control disable supported 00:40:50.368 portid: 1 00:40:50.368 trsvcid: 4420 00:40:50.368 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:50.368 traddr: 10.0.0.1 00:40:50.368 eflags: none 00:40:50.368 sectype: none 00:40:50.368 =====Discovery Log Entry 1====== 00:40:50.368 trtype: tcp 00:40:50.368 adrfam: ipv4 00:40:50.368 subtype: nvme subsystem 00:40:50.368 treq: not specified, sq flow control disable supported 00:40:50.368 portid: 1 00:40:50.368 trsvcid: 4420 00:40:50.368 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:50.368 traddr: 10.0.0.1 00:40:50.368 eflags: none 00:40:50.368 sectype: none 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:50.368 17:18:51 
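The mkdir/echo/ln sequence above is configure_kernel_target from nvmf/common.sh building a kernel NVMe/TCP target through configfs, and the nvme discover output confirms it is reachable at 10.0.0.1:4420. The xtrace does not show the redirection targets, so the attribute file names in this sketch are the standard nvmet configfs attributes rather than a literal transcript; the NQN, backing device and address are taken from the trace.

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn" > "$sub/attr_serial"          # identification string; exact attribute file not captured by xtrace
    echo 1 > "$sub/attr_allow_any_host"            # accept any host NQN
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"               # linking the subsystem under the port publishes it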
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:50.368 17:18:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:53.671 Initializing NVMe Controllers 00:40:53.671 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:53.671 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:53.671 Initialization complete. Launching workers. 
00:40:53.671 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34464, failed: 0 00:40:53.671 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34464, failed to submit 0 00:40:53.671 success 0, unsuccess 34464, failed 0 00:40:53.671 17:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:53.671 17:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:54.303 Cancelling nested steps due to timeout 00:40:54.306 Sending interrupt signal to process 00:40:54.612 script returned exit code 255 00:40:54.617 [Pipeline] } 00:40:54.643 [Pipeline] // timeout 00:40:54.652 [Pipeline] } 00:40:54.672 [Pipeline] // stage 00:40:54.680 [Pipeline] } 00:40:54.691 Timeout has been exceeded 00:40:54.691 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 832846f1-738e-48bd-b967-a0390980584b 00:40:54.691 Setting overall build result to ABORTED 00:40:54.713 [Pipeline] // catchError 00:40:54.723 [Pipeline] stage 00:40:54.725 [Pipeline] { (Stop VM) 00:40:54.737 [Pipeline] sh 00:40:55.014 + vagrant halt 00:40:59.203 ==> default: Halting domain... 00:41:05.778 [Pipeline] sh 00:41:06.055 + vagrant destroy -f 00:41:10.238 ==> default: Removing domain... 00:41:10.252 [Pipeline] sh 00:41:10.532 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:41:10.580 [Pipeline] } 00:41:10.595 [Pipeline] // stage 00:41:10.600 [Pipeline] } 00:41:10.617 [Pipeline] // dir 00:41:10.624 [Pipeline] } 00:41:10.666 [Pipeline] // wrap 00:41:10.673 [Pipeline] } 00:41:10.689 [Pipeline] // catchError 00:41:10.699 [Pipeline] stage 00:41:10.701 [Pipeline] { (Epilogue) 00:41:10.716 [Pipeline] sh 00:41:10.996 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:13.536 [Pipeline] catchError 00:41:13.538 [Pipeline] { 00:41:13.554 [Pipeline] sh 00:41:13.834 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:14.092 Artifacts sizes are good 00:41:14.102 [Pipeline] } 00:41:14.123 [Pipeline] // catchError 00:41:14.135 [Pipeline] archiveArtifacts 00:41:14.142 Archiving artifacts 00:41:14.388 [Pipeline] cleanWs 00:41:14.396 [WS-CLEANUP] Deleting project workspace... 00:41:14.396 [WS-CLEANUP] Deferred wipeout is used... 00:41:14.402 [WS-CLEANUP] done 00:41:14.404 [Pipeline] } 00:41:14.417 [Pipeline] // stage 00:41:14.423 [Pipeline] } 00:41:14.437 [Pipeline] // node 00:41:14.442 [Pipeline] End of Pipeline 00:41:14.477 Finished: ABORTED